Efficient Java Matrix Library
http://ejml.org/wiki/index.php?title=Main_Page
MediaWiki 1.35.11
Main Page
''Last complete revision by Peter, 2015-03-15T05:19:37Z''
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''object oriented'', and 3) ''equations''. The ''procedural'' interface provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. The ''object oriented'' interface provides a simplified subset of the core capabilities in an easy-to-use API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.26''
|-
| '''Date:''' ''September 15, 2014''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Below are code examples demonstrating how to compute the Kalman gain ''K'' using the three different interfaces in EJML.
{| width="500pt" |
|-
|
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
'''Object Oriented'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Procedural'''
<syntaxhighlight lang="java">
// All output matrices (c, S, S_inv, d, K) must be preallocated with the correct dimensions.
mult(H,P,c);           // c = H*P
multTransB(c,H,S);     // S = c*H'
addEquals(S,R);        // S = S + R
if( !invert(S,S_inv) ) // S_inv = inv(S); invert() returns false if S is singular
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d); // d = H'*S_inv
mult(P,d,K);           // K = P*d
</syntaxhighlight>
|}
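For readers who want to see exactly what these calls compute, here is a hedged, dependency-free sketch (not EJML code) of the Kalman-gain formula K = P*H'*inv( H*P*H' + R ), specialized to the 2&times;2 case with plain row-major <code>double[]</code> storage. The class name and matrix values are illustrative only:

```java
// Illustrative only: a hand-rolled 2x2, row-major version of
// K = P*H'*inv( H*P*H' + R ). EJML itself handles general sizes.
public class KalmanGainSketch {

    // 2x2 row-major product: returns a*b
    static double[] mult(double[] a, double[] b) {
        return new double[]{
            a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
            a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3]};
    }

    static double[] transpose(double[] a) {
        return new double[]{a[0], a[2], a[1], a[3]};
    }

    static double[] add(double[] a, double[] b) {
        return new double[]{a[0]+b[0], a[1]+b[1], a[2]+b[2], a[3]+b[3]};
    }

    // 2x2 inverse via the adjugate formula; fails on (near-)singular input
    static double[] invert(double[] a) {
        double det = a[0]*a[3] - a[1]*a[2];
        if (Math.abs(det) < 1e-12) throw new RuntimeException("Invert failed");
        return new double[]{a[3]/det, -a[1]/det, -a[2]/det, a[0]/det};
    }

    public static void main(String[] args) {
        double[] H = {1, 0, 0, 1}; // observation model (identity here)
        double[] P = {2, 0, 0, 2}; // state covariance
        double[] R = {1, 0, 0, 1}; // measurement noise covariance

        double[] S = add(mult(mult(H, P), transpose(H)), R); // S = H*P*H' + R
        double[] K = mult(P, mult(transpose(H), invert(S))); // K = P*H'*inv(S)

        System.out.println(K[0] + " " + K[3]); // both diagonal entries: 2/3
    }
}
```

This mirrors the structure of the procedural example above, just with the library's generality stripped away.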
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will begin once the block implementations of the SVD and eigenvalue decompositions are finished.
</center>
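The ''row-major'' and ''block'' internal formats listed in the table differ only in how a (row, column) pair maps into the backing array. A hedged sketch of the two index computations (the helper names and block size are illustrative, not EJML's API; the block version assumes dimensions divisible by the block size for brevity):

```java
// Illustrative index math for the two dense storage formats named above.
public class MatrixIndexing {

    // Row-major: elements of a row are adjacent in memory.
    static int rowMajorIndex(int row, int col, int numCols) {
        return row * numCols + col;
    }

    // Block format: the matrix is tiled into blockSize x blockSize tiles;
    // tiles are laid out row-major, and so are elements inside each tile.
    // Simplification: assumes the dimensions divide evenly by blockSize.
    static int blockIndex(int row, int col, int numCols, int blockSize) {
        int blocksPerRow = numCols / blockSize;
        int tileStart = ((row / blockSize) * blocksPerRow + (col / blockSize))
                        * blockSize * blockSize;
        return tileStart + (row % blockSize) * blockSize + (col % blockSize);
    }

    public static void main(String[] args) {
        // Element (1,2) of a 4x4 matrix:
        System.out.println(rowMajorIndex(1, 2, 4)); // 6
        System.out.println(blockIndex(1, 2, 4, 2)); // tile (0,1) starts at 4; offset 2 -> 6
    }
}
```

The block layout keeps each tile contiguous in memory, which improves cache behavior in large decompositions; that is the motivation for listing both formats.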
926be3a9e118f4102eead9449935c4ab6c4564be
16
13
2015-03-15T15:23:07Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are; 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, clean API, and multiple interfaces. EJML is free, written in 100% Java and has been released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''object oriented'', and 3) ''equations''. ''Procedure'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''Object oriented'' provides a simplified subset of the core capabilities in an easy to use API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.26''
|-
| '''Date:''' ''September 15, 2014''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Below are code examples demonstrating how to compute the Kalman gain, "K", using the three different interfaces in EJML.
{| width="500pt" |
|-
|
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
'''Object Oriented'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);                // c = H*P
multTransB(c,H,S);          // S = c*H'
addEquals(S,R);             // S = S + R
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);      // d = H'*inv(S)
mult(P,d,K);                // K = P*d
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and eigenvalue decomposition are finished.
</center>
998fa610ae51291d936473d8146cd7b3ed90ad5c
21
16
2015-03-22T00:06:08Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''equations''. ''Procedural'' exposes all of EJML's capabilities and gives almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.26''
|-
| '''Date:''' ''September 15, 2014''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
The examples below demonstrate how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);                // c = H*P
multTransB(c,H,S);          // S = c*H'
addEquals(S,R);             // S = S + R
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);      // d = H'*inv(S)
mult(P,d,K);                // K = P*d
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and eigenvalue decomposition are finished.
</center>
36058795c0393c9e44178597fd0f30903abc3c91
23
21
2015-03-22T00:23:12Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' exposes all of EJML's capabilities and gives almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.26''
|-
| '''Date:''' ''September 15, 2014''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
The examples below demonstrate how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);                // c = H*P
multTransB(c,H,S);          // S = c*H'
addEquals(S,R);             // S = S + R
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);      // d = H'*inv(S)
mult(P,d,K);                // K = P*d
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
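As a sanity check on the formula that all three snippets above implement, K = P*H'*inv(H*P*H' + R), here is a dependency-free plain-Java version for the 1x1 (scalar) case, where every matrix operation reduces to ordinary arithmetic. The values of P, H, and R are made up purely for illustration:

```java
public class KalmanGainScalar {
    public static void main(String[] args) {
        // Illustrative 1x1 "matrices"; transpose and inverse are trivial for scalars
        double P = 2.0;   // state covariance
        double H = 0.5;   // observation model
        double R = 1.0;   // measurement noise covariance

        double S = H * P * H + R;   // innovation covariance: H*P*H' + R = 1.5
        double K = P * H / S;       // Kalman gain: P*H'*inv(S) = 2/3

        System.out.println(K);
    }
}
```

Plugging the same numbers into any of the three interfaces should produce the same gain.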
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and eigenvalue decomposition are finished.
</center>
8cff0a93eeceeb679d117c0b453358bc19d82356
File:Ejml logo.gif
6
2
2
2015-03-14T23:51:33Z
Peter
1
Large logo with EJML text.
wikitext
text/x-wiki
Large logo with EJML text.
2b3b044d8d441374e2692c8900031199ab0ec11a
Users
0
3
12
2015-03-15T05:05:28Z
Peter
1
Created page with "= Projects which use EJML = Feel free to add your own project! * [http://wiki.industrial-craft.net Industrial Craft 2] modification for minecraft * [http://www-lium.univ-lem..."
wikitext
text/x-wiki
= Projects which use EJML =
Feel free to add your own project!
* [http://wiki.industrial-craft.net Industrial Craft 2] modification for minecraft
* [http://www-lium.univ-lemans.fr/diarization/doku.php/ LIUM_SpkDiarization] is a software package dedicated to speaker diarization (i.e. speaker segmentation and clustering).
* [http://researchers.lille.inria.fr/~freno/JProGraM.html JProGraM]: Library for learning a number of statistical models from data.
* [http://code.google.com/p/gogps/ goGPS]: Improve the positioning accuracy of low-cost GPS devices by RTK technique.
* [http://www-edc.eng.cam.ac.uk/tools/set_visualiser/ Set Visualiser]: Visualises the way that a number of items is classified into one or more categories or sets using Euler diagrams.
* Universal Java Matrix Package (UJMP): http://www.ujmp.org/
* Scalalab: http://code.google.com/p/scalalab/
* Java Content Based Image Retrieval (JCBIR): http://code.google.com/p/jcbir/
* JLabGroovy: http://code.google.com/p/jlabgroovy/
* JquantLib (Will be added): http://www.jquantlib.org/
* Matlube: https://github.com/hohonuuli/matlube
* Geometric Regression Library: http://georegression.org/
* BoofCV: Computer Vision Library: http://boofcv.org/
* ICY: bio-imaging: http://www.bioimageanalysis.com/icy/
* JSkills: Java implementation of TrueSkill algorithm https://github.com/nsp/JSkills
* Portfolio applets at http://www.christoph-junge.de/optimizer.php
* Distributed Control Framework (DCF) http://www.i-a-i.com/dcfpro/
* JptView point cloud viewer: http://www.seas.upenn.edu/~aiv/jptview/
* JPrIME Bayesian phylogenetics library: http://code.google.com/p/jprime/
* J-Matrix quantum mechanics scattering https://code.google.com/p/jmatrix/
* DDogleg Numerics: http://ddogleg.org
* Saddle: http://saddle.github.io/doc/index.html
* GDSC ImageJ Plugins: http://www.sussex.ac.uk/gdsc/intranet/microscopy/imagej/gdsc_plugins
* Robot Controller for Humanoid Robots: http://www.ihmc.us/Research/projects/HumanoidRobots/index.html
* Credit Analytics: http://code.google.com/p/creditanalytics
* Spline Library: http://code.google.com/p/splinelibrary - http://www.credit-trader.org/CreditSuite/docs/SplineLibrary_2.2.pdf
* Fixed Point Finder: http://code.google.com/p/rootfinder - http://www.credit-trader.org/CreditSuite/docs/FixedPointFinder_2.2.pdf
* Sensitivity generation scheme in Credit Analytics: http://www.credit-trader.org/CreditSuite/docs/SensitivityGenerator_2.2.pdf
* Stanford CoreNLP: A set of natural language analysis tools: http://nlp.stanford.edu/software/corenlp.shtml
* OpenChrom: Open source software for the mass spectrometric analysis of chromatographic data. https://www.openchrom.net
= Papers That Cite EJML =
* Zewdie, Dawit Habtamu. "Representation discovery in non-parametric reinforcement learning." Diss. Massachusetts Institute of Technology, 2014.
* Sanfilippo, Filippo, et al. "A mapping approach for controlling different maritime cranes and robots using ANN." Mechatronics and Automation (ICMA), 2014 IEEE International Conference on. IEEE, 2014.
* Kushman, Nate, et al. "Learning to automatically solve algebra word problems." ACL (1) (2014): 271-281.
* Stergios Papadimitriou, Seferina Mavroudi, Kostas Theofilatos, and Spiridon Likothanasis, “MATLAB-Like Scripting of Java Scientific Libraries in ScalaLab,” Scientific Programming, vol. 22, no. 3, pp. 187-199, 2014.
* Alberto Castellini, Daniele Paltrinieri, and Vincenzo Manca "MP-GeneticSynth: Inferring Biological Network Regulations from Time Series" Bioinformatics 2014
* Blasinski, H., Bulan, O., & Sharma, G. (2013). Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework.
* Marin, R. C., & Dobre, C. (2013, November). Reaching for the clouds: contextually enhancing smartphones for energy efficiency. In Proceedings of the 2nd ACM workshop on High performance mobile opportunistic systems (pp. 31-38). ACM.
* Oletic, D., Skrapec, M., & Bilas, V. (2013). Monitoring Respiratory Sounds: Compressed Sensing Reconstruction via OMP on Android Smartphone. In Wireless Mobile Communication and Healthcare (pp. 114-121). Springer Berlin Heidelberg.
* Santhiar, Anirudh and Pandita, Omesh and Kanade, Aditya "Discovering Math APIs by Mining Unit Tests" Fundamental Approaches to Software Engineering 2013
* Sanjay K. Boddhu, Robert L. Williams, Edward Wasser, Niranjan Kode, "Increasing Situational Awareness using Smartphones" Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83891J (May 1, 2012)
* J. A. Álvarez-Bermejo, N. Antequera, R. García-Rubio and J. A. López-Ramos, ''"A scalable server for key distribution and its application to accounting,"'' The Journal of Supercomputing, 2012
* Realini E., Yoshida D., Reguzzoni M., Raghavan V., ''"Enhanced satellite positioning as a web service with goGPS open source software"''. Applied Geomatics 4(2), 135-142. 2012
* Stergios Papadimitriou, Constantinos Terzidis, Seferina Mavroudi, Spiridon D. Likothanassis: ''Exploiting java scientific libraries with the scala language within the scalalab environment.'' IET Software 5(6): 543-551 (2011)
* L. T. Lim, B. Ranaivo-Malançon and E. K. Tang. ''“Symbiosis Between a Multilingual Lexicon and Translation Example Banks”''. In: Procedia: Social and Behavioral Sciences 27 (2011), pp. 61–69.
* G. Taboada, S. Ramos, R. Expósito, J. Touriño, R. Doallo, ''Java in the High Performance Computing arena: Research, practice and experience,'' Science of Computer Programming, 2011.
* http://geomatica.como.polimi.it/presentazioni/Osaka_Summer_goGPS.pdf
* http://www.holger-arndt.de/library/MLOSS2010.pdf
* http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf
Note: Slowly working on an EJML paper for publication. About 1/2 way through a first draft.
= On The Web =
* http://code.google.com/p/java-matrix-benchmark/
* http://java.dzone.com/announcements/introduction-efficient-java
* https://shakthydoss.wordpress.com/2011/01/13/jama-shortcoming/
* Various questions on stackoverflow.com
919621d3464ab0f4893de5247dc0d3e0686779f2
Frequently Asked Questions
0
4
14
2015-03-15T14:36:15Z
Peter
1
Created page with "#summary Frequently Asked Questions = Frequently Asked Questions= Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answe..."
wikitext
text/x-wiki
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a matrix with 1,000,000 by 1,000,000 elements, or some other very large matrix? ==
If you are working with large matrices, first do a quick sanity check: how much memory does that matrix use, and can your computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (rows * columns * 8) / (1024 * 1024 * 1024)
Now take that number and multiply it by 3 or 4 to take into account overhead and working memory; that is about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can hold at most 2^31 - 1 elements. If you are lucky and the system is sparse (mostly zeros), other libraries may be able to help you.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer the time to compute the solution could well exceed the lifetime of a typical human.
== Will EJML work on Android? ==
Yes, EJML has been used on Android for quite some time. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar on the Maven Central repository. See the [[Download]] page for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML's current focus is on dense matrices, but it could be extended in the future to support sparse matrices. In the meantime the following libraries do provide some support for sparse matrices. Note: I have not personally used any of these libraries with sparse matrices.
* [https://sites.google.com/site/piotrwendykier/software/csparsej CSparseJ]
* [http://la4j.org/ la4j]
* [https://github.com/fommil/matrix-toolkits-java MTJ]
d65f7cd9e72ce547b9e7830c95bea70ad355ced3
17
14
2015-03-15T15:38:47Z
Peter
1
wikitext
text/x-wiki
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check: how much memory does that matrix use, and can your computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (rows * columns * 8) / (1024 * 1024 * 1024)
Now take that number and multiply it by 3 or 4 to take into account overhead and working memory; that is about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can hold at most 2^31 - 1 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using other libraries; see below.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer the time to compute the solution could well exceed the lifetime of a typical human.
== Will EJML work on Android? ==
Yes, EJML has been used on Android for quite some time. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar on the Maven Central repository. See the [[Download]] page for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML's current focus is on dense matrices, but it could be extended in the future to support sparse matrices. In the meantime the following libraries do provide some support for sparse matrices. Note: I have not personally used any of these libraries with sparse matrices.
* [https://sites.google.com/site/piotrwendykier/software/csparsej CSparseJ]
* [http://la4j.org/ la4j]
* [https://github.com/fommil/matrix-toolkits-java MTJ]
3d320d46aa1222f1ffc692166559957235ed8bfa
36
17
2015-03-22T04:23:25Z
Peter
1
wikitext
text/x-wiki
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check: how much memory does that matrix use, and can your computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (rows * columns * 8) / (1024 * 1024 * 1024)
Now take that number and multiply it by 3 or 4 to take into account overhead and working memory; that is about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can hold at most 2^31 - 1 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using other libraries; see below.
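The back-of-the-envelope calculation above can be written directly in Java. The dimensions below are illustrative; note the use of long arithmetic so the element count does not overflow an int:

```java
public class MatrixMemoryEstimate {
    public static void main(String[] args) {
        long rows = 1_000_000, cols = 1_000_000;

        // Each element is an 8-byte double
        double gigabytes = (rows * cols * 8.0) / (1024.0 * 1024.0 * 1024.0);

        // A dense 1,000,000 x 1,000,000 matrix needs roughly 7450 GB
        // before any overhead or working memory is counted
        System.out.printf("%.1f GB%n", gigabytes);
    }
}
```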
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer the time to compute the solution could well exceed the lifetime of a typical human.
== Will EJML work on Android? ==
Yes, EJML has been used on Android for quite some time. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar on the Maven Central repository. See the [[Download]] page for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML's current focus is on dense matrices, but it could be extended in the future to support sparse matrices. In the meantime the following libraries do provide some support for sparse matrices. Note: I have not personally used any of these libraries with sparse matrices.
* [https://sites.google.com/site/piotrwendykier/software/csparsej CSparseJ]
* [http://la4j.org/ la4j]
* [https://github.com/fommil/matrix-toolkits-java MTJ]
== What version of Java? ==
EJML can be compiled with Java 1.6 or later. With a few minor modifications to the source code you can get it to compile with 1.5.
34e910893cad22ff0099689fb04da17a5c6a7078
Acknowledgments
0
5
15
2015-03-15T14:56:28Z
Peter
1
Created page with "== Development == EJML has been developed almost entirely by [https://www.linkedin.com/profile/view?id=9580871 Peter Abeles] in his spare time. Much of the development of EJ..."
wikitext
text/x-wiki
== Development ==
EJML has been developed almost entirely by [https://www.linkedin.com/profile/view?id=9580871 Peter Abeles] in his spare time. Much of the development of EJML was inspired by his frustration with the libraries that existed at the time. They had very poor performance with small matrices, excessive memory creation/destruction, (arguably) not the best API, and tended to be abandoned by their developers shortly after he had decided he liked one. The state of Java numerical libraries has generally improved since then.
Additional thanks should go towards the [http://ihmc.us Institute for Human Machine Cognition] (IHMC) which encouraged the continued development of EJML and even commissioned the inclusion of the first few complex matrix operations after he had left.
All the feedback and bug reports from its users have also had a significant influence on this library. Without their encouragement and help it would be less stable and much less fleshed out than it is today. The book [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins] also significantly influenced the development of the library in its early days. It is probably the best introduction to the computational side of linear algebra written so far and includes many important implementation details left out of other books.
== Dependencies ==
EJML is entirely self contained and is only dependent on JUnit for tests.
* http://www.junit.org/
e1b3906f72a292467dc5f533c79b1e2974377ad1
Download
0
6
18
2015-03-16T11:09:17Z
Peter
1
Created page with "== Source Code == Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in an usable state, but not always! [https:..."
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.26/ EJML Downloads]
== Gradle ==
Add the following to include it in your Gradle project. Got to love how brief it is compared to Maven.
<syntaxhighlight lang="groovy">
['core','dense64','denseC64','simple','equation'].each { String a ->
    compile group: 'org.ejml', name: a, version: '0.27-SNAPSHOT'
}
</syntaxhighlight>
== Maven ==
To include the latest stable code in your Maven project add the following to your dependency list.
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>core</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.ejml</groupId>
<artifactId>dense64</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.ejml</groupId>
<artifactId>denseC64</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.ejml</groupId>
<artifactId>simple</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.ejml</groupId>
<artifactId>equation</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
</syntaxhighlight>
d7bc3f118774069d3d6ff508053c900d87a35e94
Performance
0
7
19
2015-03-21T23:24:02Z
Peter
1
Created page with "= How does EJML compare? = There are several issues to consider when selecting a linear algebra library; runtime speed, memory consumption, and stability. All three are very..."
wikitext
text/x-wiki
= How does EJML compare? =
There are several issues to consider when selecting a linear algebra library: runtime speed, memory consumption, and stability. All three are very important, but speed tends to get the most attention. [https://code.google.com/p/java-matrix-benchmark/ Java Matrix Benchmark] was developed at the same time as EJML and is used to evaluate the most popular linear algebra libraries written in Java. The general takeaway from those results is that EJML is one of the fastest single-threaded libraries and in many instances is competitive with multi-threaded libraries. It is also among the most stable and memory-efficient libraries.
<center>
[https://code.google.com/p/java-matrix-benchmark/ http://java-matrix-benchmark.googlecode.com/svn/wiki/RuntimeCorei7v2600_2013_10.attach/summary.png]
</center>
= Fastest Interface? =
Another question when using EJML is: ''Which interface should I use for high performance computing?'' In general you can get the most performance out of the procedural interface. However, there are times when the added complexity of using that interface isn't worth it. For example, if you are working with very large matrices the object-oriented ''SimpleMatrix'' interface is almost as fast. Below are benchmarking results comparing the different interfaces in EJML.
== Relative Runtime Plots ==
Results are presented using relative runtime plots. These plots show how fast each interface is relative to the others; the fastest interface at each matrix size always has a value of one, since it can perform the most operations per second. For more information see the Java Matrix Benchmark manual. In the plots below, EJML refers to EJML using the operations (procedural) interface and SEJML to EJML using SimpleMatrix.
Looking at the addition plot, SimpleMatrix runs at about 0.25 times the speed of DenseMatrix64F for smaller matrices. When processing larger matrices it runs at about 0.6 times the speed of the operations interface, meaning it is relatively faster for larger matrices. For more expensive operations (SVD, solve, matrix multiplication, etc.) it is clear that the difference in performance is not significant for matrices that are 100 by 100 or larger.
== Test Environment ==
{| class="wikitable" |
! Date !! July 4, 2010
|-
| OS || Vista 64bit
|-
| CPU || Q9400 - 2.66 Ghz - 4 cores
|-
| JVM || Java HotSpot(TM) 64-Bit Server VM 1.6.0_16
|-
| Benchmark || 0.7pre
|-
| EJML || 0.14pre
|}
TODO recompute these results with Equations and move the files to a local directory
== Basic Operation ==
{|
|-
| http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/add.png || http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/scale.png
|-
| http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/mult.png || http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/inv.png
|-
| http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/det.png || http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/tran.png
|-
|}
== Solving and Decompositions ==
{|
|-
| http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/solveEq.png ||http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/solveOver.png
|-
| http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/svd.png || http://efficient-java-matrix-library.googlecode.com/svn/wiki/SpeedSimpleMatrix.attach/EigSymm.png
|}
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. The list of standard operations is typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided in EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
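As a rough illustration of what a dense-matrix library does under the hood, a dense real matrix is typically a single row-major array plus loops over it. The sketch below is a simplified stand-in, in the spirit of (but not identical to) EJML's actual DenseMatrix64F implementation:

```java
// Simplified sketch of a dense row-major matrix with multiplication.
// This is illustrative only, not EJML's real data structure.
public class TinyDense {
    final int rows, cols;
    final double[] data; // row-major: element (r,c) is data[r*cols + c]

    TinyDense(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new double[rows * cols];
    }

    double get(int r, int c) { return data[r * cols + c]; }
    void set(int r, int c, double v) { data[r * cols + c] = v; }

    /** Returns a*b using the textbook triple loop, ordered for row-major access. */
    static TinyDense mult(TinyDense a, TinyDense b) {
        TinyDense c = new TinyDense(a.rows, b.cols);
        for (int i = 0; i < a.rows; i++)
            for (int k = 0; k < a.cols; k++) {
                double aik = a.get(i, k);
                for (int j = 0; j < b.cols; j++)
                    c.data[i * c.cols + j] += aik * b.get(k, j);
            }
        return c;
    }
}
```

Real libraries reorder loops and block for cache; EJML additionally selects among algorithms at runtime based on matrix size.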
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy-to-read/write code. Since it's hard to do both with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and it's tedious to exercise that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a fluent style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input sizes don't change. It's a bit of a black box, and its compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);            // c = H*P
multTransB(c,H,S);      // S = c*H' = H*P*H'
addEquals(S,R);         // S = S + R
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);  // d = H'*S_inv
mult(P,d,K);            // K = P*d = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need highly optimized code, then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the interface overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, then [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is actually a bottleneck; it's much easier to debug that way.
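All three snippets in the table compute the same quantity, K = P H' (H P H' + R)^-1. As a sanity check, here is the scalar (1x1) case in plain Java, where the matrix products reduce to ordinary arithmetic; the input values are arbitrary:

```java
// Scalar (1x1) version of the Kalman gain computed by the snippets above:
// K = P*H'*inv(H*P*H' + R). For 1x1 matrices, transpose and inverse are trivial.
public class ScalarKalmanGain {
    public static double gain(double P, double H, double R) {
        double S = H * P * H + R; // innovation covariance; H' == H for scalars
        return P * H / S;         // K = P*H'*inv(S)
    }

    public static void main(String[] args) {
        System.out.println(gain(2.0, 1.0, 2.0)); // prints 0.5
    }
}
```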
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Complex Math|Complex Math]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Customizing SimpleMatrix]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The follow are code examples of common linear algebra problems intended to demonstrate different parts of EJML. In the table below it indicates which interface or interfaces the example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || X || X
|-
| [[Example Principal Component Analysis|Polynomial Fitting]] || X || X || X
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || X || X
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
55fc7a765212cd618cb3778faa65b8aae3501225
41
39
2015-03-22T05:32:21Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. These operations are typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and how to develop an application with it. Other questions, such as how to build it or how to include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, which are selected from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided by EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and code that is easy to read and write. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and exercising that much control over memory is tedious.
* [[SimpleMatrix]]: An object oriented API that lets you chain multiple operations together, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that lets you manipulate matrices in a manner similar to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input sizes don't change. It is a bit of a black box, and its compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
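To make concrete what all three snippets above compute, here is a dependency-free sketch of K = P*H'*inv(H*P*H' + R) using plain 2D arrays rather than the EJML API. The class and method names are hypothetical, and the inverse is hard-coded for 2x2 matrices just to keep the sketch short.

```java
// Dependency-free sketch of K = P*H'*inv(H*P*H' + R) with plain 2D arrays
// (NOT the EJML API). All matrices here are assumed to be 2x2.
public class KalmanGainSketch {

    static double[][] mult(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int x = 0; x < k; x++)
                    c[i][j] += a[i][x] * b[x][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    static double[][] add(double[][] a, double[][] b) {
        double[][] c = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                c[i][j] = a[i][j] + b[i][j];
        return c;
    }

    // closed-form inverse of a 2x2 matrix
    static double[][] invert2x2(double[][] s) {
        double det = s[0][0] * s[1][1] - s[0][1] * s[1][0];
        if (det == 0) throw new RuntimeException("Invert failed");
        return new double[][] {
            {  s[1][1] / det, -s[0][1] / det },
            { -s[1][0] / det,  s[0][0] / det } };
    }

    // K = P*H'*inv(H*P*H' + R), the quantity computed by every snippet above
    static double[][] gain(double[][] H, double[][] P, double[][] R) {
        double[][] S = add(mult(mult(H, P), transpose(H)), R);
        return mult(mult(P, transpose(H)), invert2x2(S));
    }
}
```

For example, with H, P, and R all identity, S = 2I and the gain comes out to 0.5*I, which is easy to confirm by hand.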
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code, then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the interface overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, then [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is actually a bottleneck; it's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
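The dense real type stores its elements in a single flat double array in row-major order. The class below is an illustrative sketch of that layout only, not EJML's actual matrix class.

```java
// Minimal sketch of a row-major dense real matrix: one flat double[]
// where element (row, col) lives at data[row * numCols + col].
// Illustrative only; not the EJML class itself.
public class DenseSketch {
    final int numRows, numCols;
    final double[] data;

    DenseSketch(int numRows, int numCols) {
        this.numRows = numRows;
        this.numCols = numCols;
        this.data = new double[numRows * numCols];
    }

    int index(int row, int col) { return row * numCols + col; }

    void set(int row, int col, double v) { data[index(row, col)] = v; }

    double get(int row, int col) { return data[index(row, col)]; }
}
```

This layout is why element access is cheap and why algorithms that walk a row at a time are cache friendly.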
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Complex Math|Complex Math]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Customizing SimpleMatrix]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || X || X
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
= Matlab to EJML =
To help Matlab users quickly learn EJML, lists of equivalent functions are provided below. Many functions in Matlab have equivalent or similar functions in EJML. To help port Matlab code into EJML, two lists are provided: one for SimpleMatrix and one for the procedural API. If a function is not provided by SimpleMatrix, it is probably provided by the more advanced procedural API. Keep in mind that directly porting Matlab code will often result in inefficient code: in Matlab, for loops are very expensive and extracting sub-matrices is often preferred, while Java, like C++, handles for loops well, and extracting and inserting sub-matrices can be much less efficient than manipulating the matrix directly.
Looking for a Matlab interface to use in Java? Check out the new EJML module Equations.
= Equations =
Equations is very similar to Matlab, but there are a few differences. For a description of the syntax and a list of available functions, check out the [[Equations]] tutorial.
= SimpleMatrix =
A subset of EJML's functionality is provided by [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire, then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag({{{[1 2 3]}}}) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A{{{*}}}B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2{{{*}}}A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
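Several rows above differ only in index conventions: Matlab is 1-based with inclusive ranges, while SimpleMatrix is 0-based with an exclusive upper bound, which is why A(2:4,3:8) becomes extractMatrix(1,4,2,8). The helper below is hypothetical (not part of EJML) and just shows the translation rule.

```java
// Translates a 1-based, inclusive Matlab range like 2:4 into the
// 0-based, exclusive-upper-bound pair that extractMatrix-style calls
// expect. Hypothetical helper for illustration; not part of the EJML API.
public class MatlabRange {
    // returns {begin0, end0} covering the same indices as Matlab lo:hi
    static int[] toZeroBased(int matlabLo, int matlabHi) {
        // lo-1 shifts to 0-based; hi is unchanged because
        // "hi inclusive, 1-based" equals "hi exclusive, 0-based"
        return new int[] { matlabLo - 1, matlabHi };
    }
}
```

So the row range 2:4 maps to (1,4) and the column range 3:8 maps to (2,8), matching the table entry above.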
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag({{{[1 2 3]}}}) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A {{{*}}} B || CommonOps.mult(A,B,C)
|-
| C = A .{{{*}}} B || CommonOps.elementMult(A,B,C)
|-
| A = A .{{{*}}} B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 {{{*}}} A || CommonOps.scale(2,A,C)
|-
| A = 2 {{{*}}} A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) || DecompositionFactory.lu(A.numCols)
|}
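Several entries above construct matrices with DenseMatrix64F(numRows, numCols, rowMajor, data...), where the boolean says whether the supplied values are in row-major or column-major order. The sketch below illustrates that fill rule with plain arrays; it is not EJML's actual implementation, and the class name is made up for this example.

```java
// Sketch of how a (numRows, numCols, rowMajor, data...) constructor fills
// row-major storage: with rowMajor=true the values are copied straight
// through; with rowMajor=false they arrive column by column and are
// rearranged into row-major order. Illustrative only, not EJML code.
public class FillSketch {
    static double[] fill(int numRows, int numCols, boolean rowMajor, double... data) {
        double[] out = new double[numRows * numCols];
        if (rowMajor) {
            System.arraycopy(data, 0, out, 0, out.length);
        } else {
            // data[i] walks down each column; place it at its row-major slot
            for (int col = 0, i = 0; col < numCols; col++)
                for (int row = 0; row < numRows; row++, i++)
                    out[row * numCols + col] = data[i];
        }
        return out;
    }
}
```

With rowMajor=true the values 1,2,3 form the row vector used in the C(2,:) entry above; with rowMajor=false the same values would be read as a column at a time.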
53484438c3a5bb1df958b471139ebe16f1a54cdf
26
25
2015-03-22T03:46:01Z
Peter
1
wikitext
text/x-wiki
To help Matlab users quickly learn how to use EJML a list of equivalent functions is provided below
Many functions in Matlab have equivalent or similar functions in EJML. To help port Matlab code into EJML two list are provided for SimpleMatrix and the procedural API. If a function is not provided by SimpleMatrix it is probably provided by the more advanced procedural API.
Looking for a Matlab interface to use in Java? Check out the new EJML module Equations.
= Equations =
Equations is very similar to Matlab but there are a few differences. For a description of the syntax and list of available functions checkout the [[Equations]] tutorial.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag({{{[1 2 3]}}}) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A{{{*}}}B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2{{{*}}}A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag({{{[1 2 3]}}}) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A {{{*}}} B || CommonOps.mult(A,B,C)
|-
| C = A .{{{*}}} B || CommonOps.elementMult(A,B,C)
|-
| A = A .{{{*}}} B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 {{{*}}} A || CommonOps.scale(2,A,C)
|-
| A = 2 {{{*}}} A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory.lu(A.numCols)
|}
2cbde4dae43b9446c8fc86d19f264a6e48e9fc1b
27
26
2015-03-22T03:48:48Z
Peter
1
wikitext
text/x-wiki
To help Matlab users quickly learn how to use EJML a list of equivalent functions is provided in the sections below. Keep in mind that directly porting Matlab code will often result in inefficient code. In Matlab for loops are very expensive and often extracting sub-matrices is the preferred method. Java like C++ can handle for loops much better and extracting and inserting a matrix can be much less efficient than direct manipulation of the matrix itself.
= Equations =
Equations is very similar to Matlab, but there are a few differences. For a description of the syntax and a list of available functions, check out the [[Equations]] tutorial.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag({{{[1 2 3]}}}) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A{{{*}}}B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2{{{*}}}A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F, its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag({{{[1 2 3]}}}) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A {{{*}}} B || CommonOps.mult(A,B,C)
|-
| C = A .{{{*}}} B || CommonOps.elementMult(A,B,C)
|-
| A = A .{{{*}}} B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 {{{*}}} A || CommonOps.scale(2,A,C)
|-
| A = 2 {{{*}}} A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory.lu(A.numCols)
|}
06361bf40dabdc852dd623157d65517d3d459880
28
27
2015-03-22T03:49:38Z
Peter
1
wikitext
text/x-wiki
To help Matlab users quickly learn EJML, equivalent functions are listed in the sections below. Keep in mind that directly porting Matlab code will often result in inefficient Java code. In Matlab, for loops are very expensive, so extracting sub-matrices is often the preferred approach. Java, like C++, handles for loops well, and extracting and inserting sub-matrices can be much less efficient than manipulating the matrix directly.
= Equations =
Equations is very similar to Matlab, but there are a few differences. For a description of the syntax and a list of available functions, check out the [[Equations]] tutorial.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag([1 2 3]) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A{{{*}}}B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2{{{*}}}A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F, its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag([1 2 3]) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A {{{*}}} B || CommonOps.mult(A,B,C)
|-
| C = A .{{{*}}} B || CommonOps.elementMult(A,B,C)
|-
| A = A .{{{*}}} B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 {{{*}}} A || CommonOps.scale(2,A,C)
|-
| A = 2 {{{*}}} A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory.lu(A.numCols)
|}
7f2f4a5048de66e909b08f48e6e36e96e9da0c5d
29
28
2015-03-22T03:52:20Z
Peter
1
wikitext
text/x-wiki
To help Matlab users quickly learn EJML, equivalent functions are listed in the sections below. Keep in mind that directly porting Matlab code will often result in inefficient Java code. In Matlab, for loops are very expensive, so extracting sub-matrices is often the preferred approach. Java, like C++, handles for loops well, and extracting and inserting sub-matrices can be much less efficient than manipulating the matrix directly.
= Equations =
Equations is very similar to Matlab, but there are a few differences. For a description of the syntax and a list of available functions, check out the [[Equations]] tutorial.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag([1 2 3]) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A*B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2*A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
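To make the element-wise reductions in the table above concrete, here is a minimal plain-Java sketch of what <code>max(abs(A(:)))</code> and <code>sum(A(:))</code> compute. This is not EJML code and does not depend on the library; the class and method names are illustrative only.

```java
// Plain-Java sketch of the element-wise reductions listed above.
// Mirrors Matlab's max(abs(A(:))) and sum(A(:)); illustrative only, not EJML code.
class ElementOps {
    // Equivalent of max(abs(A(:))) / SimpleMatrix.elementMaxAbs()
    static double elementMaxAbs(double[][] a) {
        double max = 0;
        for (double[] row : a)
            for (double v : row)
                max = Math.max(max, Math.abs(v));
        return max;
    }

    // Equivalent of sum(A(:)) / SimpleMatrix.elementSum()
    static double elementSum(double[][] a) {
        double sum = 0;
        for (double[] row : a)
            for (double v : row)
                sum += v;
        return sum;
    }
}
```

For a 2x2 matrix [1 -5; 3 2], the max-abs reduction returns 5 and the sum returns 1.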
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F, its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag([1 2 3]) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A * B || CommonOps.mult(A,B,C)
|-
| C = A .* B || CommonOps.elementMult(A,B,C)
|-
| A = A .* B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 * A || CommonOps.scale(2,A,C)
|-
| A = 2 * A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory.lu(A.numCols)
|}
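As a sanity check of the norm entries in the table, the Frobenius norm <code>norm(A,"fro")</code> (EJML's <code>NormOps.normf</code>) is simply the square root of the sum of squared elements. A minimal plain-Java sketch, with no EJML dependency and an illustrative class name:

```java
// Plain-Java sketch of norm(A,"fro"): sqrt of the sum of squared elements.
// Illustrative only; EJML's NormOps.normf provides this for DenseMatrix64F.
class FrobeniusNorm {
    static double normF(double[][] a) {
        double sum = 0;
        for (double[] row : a)
            for (double v : row)
                sum += v * v;      // accumulate squared elements
        return Math.sqrt(sum);     // square root of the total
    }
}
```

For the diagonal matrix [3 0; 0 4] this yields sqrt(9 + 16) = 5.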
36aa6ece4ba4135636ad383ba875993e55dec89f
Example Kalman Filter
0
10
30
2015-03-22T04:01:03Z
Peter
1
Created page with "= Introduction = Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter]] can be created using different API's in EJML. Eac..."
wikitext
text/x-wiki
= Introduction =
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has its own advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trade-off empirically.
Performance Summary:
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Operations || 1280
|-
| Equations || 1698
|}
Direct Links:
* [[#SimpleMatrix_Example|SimpleMatrix]]
* [[#Operations_Example|Operations]]
* [[#Equations_Example|Equations]]
All example code is included in EJML's source code directory. You can also view them in GitHub:
[https://github.com/lessthanoptimal/ejml/tree/master/examples GitHub Example Code]
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve numerical stability and/or avoid the matrix inversion. It's worth noting that some people say you should never invert the matrix in a Kalman filter, yet there are applications, such as target tracking, where matrix inversion is required.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F;
private SimpleMatrix Q;
private SimpleMatrix H;
// system state estimate
private SimpleMatrix x;
private SimpleMatrix P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-KH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Operations Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced from the KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F;
private DenseMatrix64F Q;
private DenseMatrix64F H;
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-KH)P = P - (KH)P = P - K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix placeholder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile the equations so they don't have to be compiled each time they're invoked.
// More cumbersome, but for small matrices the compilation overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
3e127c17bb698a9862fcfc877d0b8c4d765a6646
31
30
2015-03-22T04:11:44Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has its own advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trade-off empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve numerical stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter, yet there are applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F;
private SimpleMatrix Q;
private SimpleMatrix H;
// system state estimate
private SimpleMatrix x;
private SimpleMatrix P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-KH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced from the KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F;
private DenseMatrix64F Q;
private DenseMatrix64F H;
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-KH)P = P - (KH)P = P - K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix placeholder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile the equations so they don't have to be compiled each time they're invoked.
// More cumbersome, but for small matrices the compilation overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
0859c4eb8668170bc9f486caf20db083d7e31471
32
31
2015-03-22T04:13:30Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has its own advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trade-off empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve numerical stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter, yet there are applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
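To make the note above concrete, here is a minimal one-dimensional Kalman filter in plain Java (no EJML dependency; the class is illustrative only). With a scalar state and measurement, the inverse of the innovation covariance S reduces to an ordinary division, which is why the inversion is harmless in many tracking problems; the same predict/update equations appear in matrix form in the examples below.

```java
// Minimal scalar (1-D) Kalman filter sketch, plain Java, no EJML.
// With a scalar state, inv(S) is just a division.
class ScalarKalman {
    double x, p;            // state estimate and its variance
    final double f, q, h;   // state transition, process noise, measurement model

    ScalarKalman(double x0, double p0, double f, double q, double h) {
        this.x = x0; this.p = p0; this.f = f; this.q = q; this.h = h;
    }

    void predict() {
        x = f * x;                // x = F x
        p = f * p * f + q;        // P = F P F' + Q
    }

    void update(double z, double r) {
        double y = z - h * x;     // innovation:            y = z - H x
        double s = h * p * h + r; // innovation covariance: S = H P H' + R
        double k = p * h / s;     // gain: K = P H' inv(S)  (a division here)
        x = x + k * y;            // x = x + K y
        p = p - k * h * p;        // P = (I - K H) P
    }
}
```

Starting from x=0, p=1 with f=h=1 and q=0, a single update with z=1, r=1 gives a gain of 0.5, so the estimate moves halfway to the measurement and the variance halves.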
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F;
private SimpleMatrix Q;
private SimpleMatrix H;
// system state estimate
private SimpleMatrix x;
private SimpleMatrix P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
 * A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
 * memory creation/destruction has been reduced compared to KalmanFilterSimple. A specialized solver is
 * used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F;
private DenseMatrix64F Q;
private DenseMatrix64F H;
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
e69b7c9546e06423e5760d3c3bbe95020a85482e
46
32
2015-03-22T05:47:59Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has its own advantages and disadvantages. Higher-level interfaces tend to be easier to use, but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. Runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterProcedural]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter. There are applications, such as target tracking, where matrix inversion of the innovation covariance is helpful as a preprocessing step.
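The matrix equations shared by all three implementations below are easiest to follow in one dimension, where every matrix collapses to a scalar. The following self-contained sketch (plain Java, no EJML; the class and field names are illustrative, not part of the library) walks through the same predict/update cycle:

```java
// One-dimensional Kalman filter: F, Q, H, P, R all collapse to scalars,
// so the equations used in the EJML examples can be followed directly.
public class ScalarKalman {
    double x, p;           // state estimate and its variance
    final double f, q, h;  // state transition, process noise, measurement model

    ScalarKalman(double f, double q, double h, double x0, double p0) {
        this.f = f; this.q = q; this.h = h; this.x = x0; this.p = p0;
    }

    void predict() {
        x = f * x;         // x = F x
        p = f * p * f + q; // P = F P F' + Q
    }

    void update(double z, double r) {
        double y = z - h * x;     // innovation: y = z - H x
        double s = h * p * h + r; // innovation covariance: S = H P H' + R
        double k = p * h / s;     // gain: K = P H' S^-1
        x = x + k * y;            // x = x + K y
        p = p - k * h * p;        // P = (I - K H) P
    }
}
```

Note how the update always shrinks the variance p, which is the scalar analogue of the covariance update in each implementation below.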
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F;
private SimpleMatrix Q;
private SimpleMatrix H;
// system state estimate
private SimpleMatrix x;
private SimpleMatrix P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
 * A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
 * memory creation/destruction has been reduced compared to KalmanFilterSimple. A specialized solver is
 * used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F;
private DenseMatrix64F Q;
private DenseMatrix64F H;
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
9adc836fbe0688094ce01a73036a418ae2a94855
MediaWiki:Sidebar
8
11
33
2015-03-22T04:19:15Z
Peter
1
Created page with " * navigation ** mainpage|mainpage-description ** manual|manual ** download|download ** recentchanges-url|recentchanges ** randompage-url|randompage ** helppage|help * SEARCH..."
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** manual|manual
** download|download
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
4d85af992cbb3f14e1c403c3a0fa0933b6e9f867
34
33
2015-03-22T04:19:55Z
Peter
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** Manual|Manual
** Download|Download
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
5f1b0fe118058b7662cdf1a320c0dfa4dc0c1b1d
Example Levenberg-Marquardt
0
12
37
2015-03-22T04:29:54Z
Peter
1
Created page with "Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrate how a basic implementation of Levenberg-Marquardt can be created using EJML's Pr..."
wikitext
text/x-wiki
Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
The algorithm is provided a function, set of inputs, set of outputs, and an initial estimate of the parameters (this often works with all zeros). It finds the parameters that minimize the difference between the computed output and the observed output. A numerical Jacobian is used to estimate the function's gradient.
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
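The numerical Jacobian mentioned above uses a forward difference: perturb each parameter by a small delta, re-evaluate the function, and divide the change in the outputs by the delta. A minimal plain-Java sketch of that scheme (no EJML; the class and method names are illustrative):

```java
import java.util.function.BiFunction;

// Forward-difference numerical Jacobian, the same scheme as the example
// below: jacobian[i][j] = d f(x_j; p) / d p_i, estimated numerically.
public class NumericalJacobian {
    static final double DELTA = 1e-8;

    // func maps (params, inputs) -> outputs
    static double[][] jacobian(BiFunction<double[], double[], double[]> func,
                               double[] param, double[] x) {
        double[] f0 = func.apply(param, x);
        double[][] J = new double[param.length][x.length];
        for (int i = 0; i < param.length; i++) {
            param[i] += DELTA;               // perturb one parameter
            double[] f1 = func.apply(param, x);
            for (int j = 0; j < x.length; j++)
                J[i][j] = (f1[j] - f0[j]) / DELTA;
            param[i] -= DELTA;               // restore it, as the example does
        }
        return J;
    }
}
```

For the linear model f(x; a, b) = a*x + b the rows of the Jacobian are simply the inputs x and a row of ones, which makes a good sanity check.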
Github Code:
[https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt]
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
* d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
* H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// how much the numerical jacobian calculation perturbs the parameters by.
// In a better implementation there are better ways to compute this delta. See Numerical Recipes.
private final static double DELTA = 1e-8;
private double initialLambda;
// the function that is optimized
private Function func;
// the optimized parameters and associated costs
private DenseMatrix64F param;
private double initialCost;
private double finalCost;
// used by matrix operations
private DenseMatrix64F d;
private DenseMatrix64F H;
private DenseMatrix64F negDelta;
private DenseMatrix64F tempParam;
private DenseMatrix64F A;
// variables used by the numerical jacobian algorithm
private DenseMatrix64F temp0;
private DenseMatrix64F temp1;
// used when computing d and H variables
private DenseMatrix64F tempDH;
// Where the numerical Jacobian is stored.
private DenseMatrix64F jacobian;
/**
* Creates a new instance that uses the provided cost function.
*
* @param funcCost Cost function that is being optimized.
*/
public LevenbergMarquardt( Function funcCost )
{
this.initialLambda = 1;
// declare data to some initial small size. It will grow later on as needed.
int maxElements = 1;
int numParam = 1;
this.temp0 = new DenseMatrix64F(maxElements,1);
this.temp1 = new DenseMatrix64F(maxElements,1);
this.tempDH = new DenseMatrix64F(maxElements,1);
this.jacobian = new DenseMatrix64F(numParam,maxElements);
this.func = funcCost;
this.param = new DenseMatrix64F(numParam,1);
this.d = new DenseMatrix64F(numParam,1);
this.H = new DenseMatrix64F(numParam,numParam);
this.negDelta = new DenseMatrix64F(numParam,1);
this.tempParam = new DenseMatrix64F(numParam,1);
this.A = new DenseMatrix64F(numParam,numParam);
}
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
public DenseMatrix64F getParameters() {
return param;
}
/**
* Finds the best fit parameters.
*
* @param initParam The initial set of parameters for the function.
* @param X The inputs to the function.
* @param Y The "observed" output of the function
* @return true if it succeeded and false if it did not.
*/
public boolean optimize( DenseMatrix64F initParam ,
DenseMatrix64F X ,
DenseMatrix64F Y )
{
configure(initParam,X,Y);
// save the cost of the initial parameters so that it knows if it improves or not
initialCost = cost(param,X,Y);
// iterate until the difference between the costs is insignificant
// or it iterates too many times
if( !adjustParam(X, Y, initialCost) ) {
finalCost = Double.NaN;
return false;
}
return true;
}
/**
* Iterate until the difference between the costs is insignificant
* or it iterates too many times
*/
private boolean adjustParam(DenseMatrix64F X, DenseMatrix64F Y,
double prevCost) {
// lambda adjusts how big of a step it takes
double lambda = initialLambda;
// the difference between the current and previous cost
double difference = 1000;
for( int iter = 0; iter < 20 && difference > 1e-6 ; iter++ ) {
// compute some variables based on the gradient
computeDandH(param,X,Y);
// try various step sizes and see if any of them improve the
// results over what has already been done
boolean foundBetter = false;
for( int i = 0; i < 5; i++ ) {
computeA(A,H,lambda);
if( !solve(A,d,negDelta) ) {
return false;
}
// compute the candidate parameters
subtract(param, negDelta, tempParam);
double cost = cost(tempParam,X,Y);
if( cost < prevCost ) {
// the candidate parameters produced better results so use it
foundBetter = true;
param.set(tempParam);
difference = prevCost - cost;
prevCost = cost;
lambda /= 10.0;
} else {
lambda *= 10.0;
}
}
// it reached a point where it can't improve so exit
if( !foundBetter )
break;
}
finalCost = prevCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure( DenseMatrix64F initParam , DenseMatrix64F X , DenseMatrix64F Y )
{
if( Y.getNumRows() != X.getNumRows() ) {
throw new IllegalArgumentException("Different vector lengths");
} else if( Y.getNumCols() != 1 || X.getNumCols() != 1 ) {
throw new IllegalArgumentException("Inputs must be a column vector");
}
int numParam = initParam.getNumElements();
int numPoints = Y.getNumRows();
if( param.getNumElements() != initParam.getNumElements() ) {
// reshaping a matrix means that new memory is only declared when needed
this.param.reshape(numParam,1, false);
this.d.reshape(numParam,1, false);
this.H.reshape(numParam,numParam, false);
this.negDelta.reshape(numParam,1, false);
this.tempParam.reshape(numParam,1, false);
this.A.reshape(numParam,numParam, false);
}
param.set(initParam);
// reshaping a matrix means that new memory is only declared when needed
temp0.reshape(numPoints,1, false);
temp1.reshape(numPoints,1, false);
tempDH.reshape(numPoints,1, false);
jacobian.reshape(numParam,numPoints, false);
}
/**
* Computes the d and H parameters. Where d is the average error gradient and
* H is an approximation of the hessian.
*/
private void computeDandH( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y )
{
func.compute(param,x, tempDH);
subtractEquals(tempDH, y);
computeNumericalJacobian(param,x,jacobian);
int numParam = param.getNumElements();
int length = x.getNumElements();
// d = average{ (f(x_i;p) - y_i) * jacobian(:,i) }
for( int i = 0; i < numParam; i++ ) {
double total = 0;
for( int j = 0; j < length; j++ ) {
total += tempDH.get(j,0)*jacobian.get(i,j);
}
d.set(i,0,total/length);
}
// compute the approximation of the hessian
multTransB(jacobian,jacobian,H);
scale(1.0/length,H);
}
/**
* A = H + lambda*I <br>
* <br>
* where I is an identity matrix.
*/
private void computeA( DenseMatrix64F A , DenseMatrix64F H , double lambda )
{
final int numParam = param.getNumElements();
A.set(H);
for( int i = 0; i < numParam; i++ ) {
A.set(i,i, A.get(i,i) + lambda);
}
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x;p) - y)^2
*/
private double cost( DenseMatrix64F param , DenseMatrix64F X , DenseMatrix64F Y)
{
func.compute(param,X, temp0);
double error = diffNormF(temp0,Y);
return error*error / (double)X.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param The set of parameters that the Jacobian is to be computed at.
* @param pt The point around which the Jacobian is to be computed.
* @param deriv Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DenseMatrix64F param ,
DenseMatrix64F pt ,
DenseMatrix64F deriv )
{
double invDelta = 1.0/DELTA;
func.compute(param,pt, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it effects the results.
for( int i = 0; i < param.numRows; i++ ) {
param.data[i] += DELTA;
func.compute(param,pt, temp1);
// compute the difference between the two parameters and divide by the delta
add(invDelta,temp1,-invDelta,temp0,temp1);
// copy the results into the jacobian matrix
System.arraycopy(temp1.data,0,deriv.data,i*pt.numRows,pt.numRows);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized.
*/
public interface Function {
/**
* Computes the output for each value in matrix x given the set of parameters.
*
* @param param The parameter for the function.
* @param x the input points.
* @param y the resulting output.
*/
public void compute( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y );
}
}
</syntaxhighlight>
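The heart of adjustParam() above is the lambda schedule: shrink lambda by 10x when a candidate step lowers the cost, grow it by 10x otherwise. The one-parameter sketch below (plain Java, no EJML; names are illustrative) isolates that logic for the model f(x; a) = a*x, where d and H reduce to scalars:

```java
// Minimal one-parameter Levenberg-Marquardt mirroring the lambda
// adaptation above. Model: f(x; a) = a * x.
public class TinyLM {
    static double cost(double a, double[] x, double[] y) {
        double sum = 0;
        for (int j = 0; j < x.length; j++) {
            double r = a * x[j] - y[j];
            sum += r * r;
        }
        return sum / x.length;
    }

    static double fit(double a, double[] x, double[] y) {
        double lambda = 1;
        double prevCost = cost(a, x, y);
        for (int iter = 0; iter < 100; iter++) {
            // d = average error gradient, H = (scalar) Hessian approximation
            double d = 0, H = 0;
            for (int j = 0; j < x.length; j++) {
                d += (a * x[j] - y[j]) * x[j];
                H += x[j] * x[j];
            }
            d /= x.length; H /= x.length;
            double candidate = a - d / (H + lambda); // a' = a - (H + λ)^-1 d
            double c = cost(candidate, x, y);
            if (c < prevCost) {
                a = candidate; prevCost = c; lambda /= 10.0; // accept: smaller step damping
            } else {
                lambda *= 10.0;                              // reject: damp harder
            }
            if (prevCost < 1e-12) break;
        }
        return a;
    }
}
```

With noise-free data generated by a = 3 this converges to the true parameter in a handful of iterations.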
695ee978296e1582a2a51de3475afc0769174df8
50
37
2015-03-22T06:07:56Z
Peter
1
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provides complete control over memory and which algorithms are used, but it is verbose and has a sharper learning curve. Alternatively, the object-oriented API (SimpleMatrix) is easier to use, but you lose control over memory and it has a limited set of operators. Neither of these APIs produces code that is all that similar to how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators, allowing for more natural code; see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions; see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equation code is compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API. There are situations where those other interfaces are easier to use, and most programs would need to use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those matrices.<br>
'''Line 04:''' Process() is called and passed a text string containing an equation. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' Process() is called again and P is set to the result of adding S and Q together. Because P is aliased to the input matrix P, that matrix is modified.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" under a multivariate normal distribution defined by the matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix you could use eq.lookupMatrix() and eq.lookupInteger() for integers. If you don't know the variable's type then eq.lookupVariable() is what you need.
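For intuition about what the likelihood string above computes, here is the same formula specialized to k = 1 in plain Java (illustrative only; the class and method names are not part of EJML). In one dimension the determinant, inverse, and quadratic form all reduce to scalar arithmetic:

```java
// Normal likelihood p = (2*pi)^(-k/2) / sqrt(det(P)) * exp(-0.5*(x-mu)' inv(P) (x-mu)),
// specialized to k = 1, where det(P) = var and inv(P) = 1/var.
public class GaussianLikelihood {
    static double pdf(double x, double mu, double var) {
        return Math.pow(2 * Math.PI, -0.5) / Math.sqrt(var)
                * Math.exp(-0.5 * (x - mu) * (x - mu) / var);
    }
}
```

At the mean with unit variance this evaluates to 1/sqrt(2*pi), a convenient sanity check.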
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
This will assign P a 3x3 matrix with 10's along its diagonal. Other matrices can also be included inside:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will concatenate A and B vertically.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
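In plain Java terms, an inclusive range such as P(2:5,0:3) corresponds to a copy loop like the sketch below (illustrative only, not EJML's implementation):

```java
// Extracts the inclusive submatrix src[r0..r1][c0..c1], matching the
// inclusive "a:b" range notation used by Equation.
public class Submatrix {
    static double[][] extract(double[][] src, int r0, int r1, int c0, int c1) {
        double[][] out = new double[r1 - r0 + 1][c1 - c0 + 1];
        for (int r = r0; r <= r1; r++)
            for (int c = c0; c <= c1; c++)
                out[r - r0][c - c0] = src[r][c];
        return out;
    }
}
```

Because both endpoints are included, rows 2:5 and columns 0:3 yield a 4x4 result, not 3x3.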
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs very literal translations of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C, save the result, then scale B by 2.5, save the result, multiply the results together, save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling the text string requires a bit of overhead, but once compiled it can be run very quickly. When dealing with larger matrices the overhead involved is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the results with out needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can show a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are declared only once, which allows the user to control when memory is created/destroyed.
To be clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws a strange exception or just doesn't do what you expected. To see the tokens and sequence of operations, set the second parameter in compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print out:
<syntaxhighlight lang="java">
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</syntaxhighlight>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. A copy of a scalar number can also be aliased. When a variable is aliased, a reference to the data is saved and a name is associated with it.
<syntaxhighlight lang="java">
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As shown above, the string name for a variable does not have to be the same as the Java name of the variable. Here is an example where an integer and a double are aliased.
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
</syntaxhighlight>
If, after benchmarking your code, you discover that the alias operation is slowing it down (a hashmap lookup is done internally), then consider the following faster, but uglier, alternative.
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
</syntaxhighlight>
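Conceptually, the alias table behaves like a map from names to values in which matrices are stored by reference and scalars as copies. The following plain-Java sketch is a hypothetical illustration of that behavior, not EJML's actual implementation; it shows why mutating an aliased matrix is visible through the alias while reassigning a scalar is not.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an alias table: arrays ("matrices") are stored by
// reference, scalar numbers as boxed copies.
public class AliasSketch {
    private final Map<String, Object> vars = new HashMap<>();

    public void alias(double[] matrix, String name) { vars.put(name, matrix); }
    public void alias(int value, String name)       { vars.put(name, value); }

    public double[] lookupMatrix(String name) { return (double[]) vars.get(name); }
    public int lookupInteger(String name)     { return (Integer) vars.get(name); }

    public static void main(String[] args) {
        AliasSketch eq = new AliasSketch();
        double[] x = {1, 2, 3};
        int i = 7;
        eq.alias(x, "x");
        eq.alias(i, "i");

        x[0] = 99;  // mutation IS visible through the alias (same reference)
        i = 8;      // reassignment is NOT visible (a copy was stored)

        System.out.println(eq.lookupMatrix("x")[0]); // 99.0
        System.out.println(eq.lookupInteger("i"));   // 7
    }
}
```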
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer range from 'a' to 'b', where 'a' and 'b' must themselves be integers. Every row or column can be specified with ":", and all rows or columns from 'a' onward with "a:". Finally, a single row or column can be referenced by simply typing its number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the sub-matrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
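The inclusive "a:b" semantics described above can be mimicked in plain Java. The helper below is a hypothetical sketch (not part of EJML) that copies the inclusive row and column ranges out of a 2D array, mirroring what A(row0:row1,col0:col1) references:

```java
public class SubmatrixSketch {
    // Copy the inclusive range [row0..row1] x [col0..col1] out of A,
    // mirroring the "A(row0:row1,col0:col1)" notation described above.
    static double[][] extract(double[][] A, int row0, int row1, int col0, int col1) {
        double[][] out = new double[row1 - row0 + 1][col1 - col0 + 1];
        for (int r = row0; r <= row1; r++)
            for (int c = col0; c <= col1; c++)
                out[r - row0][c - col0] = A[r][c];
        return out;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        double[][] s = extract(A, 1, 2, 0, 1); // "A(1:2,0:1)"
        System.out.println(s[0][0] + " " + s[1][1]); // 4.0 8.0
    }
}
```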
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
Defines a 3x3 matrix with 5, 4, 1 as its diagonal elements. Visually this looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
Will concatenate matrices A, B, and C along their rows (stacking them vertically). They must have the same number of columns. As you might guess, to concatenate along columns you would write
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
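To make the row-major format concrete, here is a minimal plain-Java sketch of a parser for purely numeric inline matrices. It is a hypothetical illustration only; the real Equation parser also handles variables and nested matrices, which are omitted here.

```java
public class InlineMatrixSketch {
    // Parse a bracketed row-major literal such as "[5 0 0;0 4.0 0.0;0 0 1]".
    // Rows are separated by semi-colons, elements within a row by whitespace.
    static double[][] parse(String text) {
        String body = text.trim();
        body = body.substring(1, body.length() - 1);   // strip '[' and ']'
        String[] rows = body.split(";");
        double[][] m = new double[rows.length][];
        for (int i = 0; i < rows.length; i++) {
            String[] cols = rows[i].trim().split("\\s+");
            m[i] = new double[cols.length];
            for (int j = 0; j < cols.length; j++)
                m[i][j] = Double.parseDouble(cols[j]);
        }
        return m;
    }

    public static void main(String[] args) {
        double[][] m = parse("[5 0 0;0 4.0 0.0 ; 0 0 1]");
        System.out.println(m[1][1]); // 4.0
    }
}
```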
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A) Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
</pre>
'''Symbols'''
<pre>
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
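As a concrete reference for two of the functions listed above, the following plain-Java sketch (not EJML's implementation) computes the Frobenius norm and the trace directly from a 2D array:

```java
public class NormSketch {
    // Frobenius norm: square root of the sum of squares of every element,
    // i.e. what normF(A) in the table above computes.
    static double normF(double[][] A) {
        double sum = 0;
        for (double[] row : A)
            for (double v : row)
                sum += v * v;
        return Math.sqrt(sum);
    }

    // Trace: sum of the diagonal elements of a square matrix, as in trace(A).
    static double trace(double[][] A) {
        double sum = 0;
        for (int i = 0; i < A.length; i++)
            sum += A[i][i];
        return sum;
    }

    public static void main(String[] args) {
        double[][] A = {{3, 4}, {0, 0}};
        System.out.println(normF(A)); // 5.0
        System.out.println(trace(A)); // 3.0
    }
}
```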
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The output matrix should also be resized.
[https://github.com/lessthanoptimal/ejml/blob/equation/examples/src/org/ejml/example/EquationCustomFunction.java Custom Function Example]
Example Principal Component Analysis
0
13
40
2015-03-22T05:30:30Z
Peter
1
Created page with "Principal Component Analysis (PCA) is a popular and simple to implement classification technique, often used in face recognition. The following is an example of how to implem..."
wikitext
text/x-wiki
Principal Component Analysis (PCA) is a popular and simple to implement classification technique, often used in face recognition. The following is an example of how to implement it in EJML using the procedural interface. It is assumed that the reader is already familiar with PCA.
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/PrincipalComponentAnalysis.java PrincipalComponentAnalysis]
For additional information on PCA:
* [http://en.wikipedia.org/wiki/Principal_component_analysis General information on Wikipedia]
= Sample Code =
<syntaxhighlight lang="java">
/**
* <p>
* The following is a simple example of how to perform basic principal component analysis in EJML.
* </p>
*
* <p>
* Principal Component Analysis (PCA) is typically used to develop a linear model for a set of data
* (e.g. face images) which can then be used to test for membership. PCA works by converting the
* set of data to a new basis that is a subspace of the original set. The subspace is selected
* to maximize information.
* </p>
* <p>
* PCA is typically derived as an eigenvalue problem. However in this implementation {@link org.ejml.interfaces.decomposition.SingularValueDecomposition SVD}
* is used instead because it will produce a more numerically stable solution. Computation using EVD requires explicitly
* computing the variance of each sample set. The variance is computed by squaring the residual, which can
* cause loss of precision.
* </p>
*
* <p>
* Usage:<br>
* 1) call setup()<br>
* 2) For each sample (e.g. an image ) call addSample()<br>
* 3) After all the samples have been added call computeBasis()<br>
* 4) Call sampleToEigenSpace() , eigenToSampleSpace() , errorMembership() , response()
* </p>
*
* @author Peter Abeles
*/
public class PrincipalComponentAnalysis {
// principal component subspace is stored in the rows
private DenseMatrix64F V_t;
// how many principal components are used
private int numComponents;
// where the data is stored
private DenseMatrix64F A = new DenseMatrix64F(1,1);
private int sampleIndex;
// mean values of each element across all the samples
double mean[];
public PrincipalComponentAnalysis() {
}
/**
* Must be called before any other functions. Declares and sets up internal data structures.
*
* @param numSamples Number of samples that will be processed.
* @param sampleSize Number of elements in each sample.
*/
public void setup( int numSamples , int sampleSize ) {
mean = new double[ sampleSize ];
A.reshape(numSamples,sampleSize,false);
sampleIndex = 0;
numComponents = -1;
}
/**
* Adds a new sample of the raw data to internal data structure for later processing. All the samples
* must be added before computeBasis is called.
*
* @param sampleData Sample from original raw data.
*/
public void addSample( double[] sampleData ) {
if( A.getNumCols() != sampleData.length )
throw new IllegalArgumentException("Unexpected sample size");
if( sampleIndex >= A.getNumRows() )
throw new IllegalArgumentException("Too many samples");
for( int i = 0; i < sampleData.length; i++ ) {
A.set(sampleIndex,i,sampleData[i]);
}
sampleIndex++;
}
/**
* Computes a basis (the principal components) from the most dominant eigenvectors.
*
* @param numComponents Number of vectors it will use to describe the data. Typically much
* smaller than the number of elements in the input vector.
*/
public void computeBasis( int numComponents ) {
if( numComponents > A.getNumCols() )
throw new IllegalArgumentException("More components requested than the data's length.");
if( sampleIndex != A.getNumRows() )
throw new IllegalArgumentException("Not all the data has been added");
if( numComponents > sampleIndex )
throw new IllegalArgumentException("More data needed to compute the desired number of components");
this.numComponents = numComponents;
// compute the mean of all the samples
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
mean[j] += A.get(i,j);
}
}
for( int j = 0; j < mean.length; j++ ) {
mean[j] /= A.getNumRows();
}
// subtract the mean from the original data
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
A.set(i,j,A.get(i,j)-mean[j]);
}
}
// Compute SVD and save time by not computing U
SingularValueDecomposition<DenseMatrix64F> svd =
DecompositionFactory.svd(A.numRows, A.numCols, false, true, false);
if( !svd.decompose(A) )
throw new RuntimeException("SVD failed");
V_t = svd.getV(null,true);
DenseMatrix64F W = svd.getW(null);
// Singular values are in an arbitrary order initially
SingularOps.descendingOrder(null,false,W,V_t,true);
// strip off unneeded components and find the basis
V_t.reshape(numComponents,mean.length,true);
}
/**
* Returns a vector from the PCA's basis.
*
* @param which Which component's vector is to be returned.
* @return Vector from the PCA basis.
*/
public double[] getBasisVector( int which ) {
if( which < 0 || which >= numComponents )
throw new IllegalArgumentException("Invalid component");
DenseMatrix64F v = new DenseMatrix64F(1,A.numCols);
CommonOps.extract(V_t,which,which+1,0,A.numCols,v,0,0);
return v.data;
}
/**
* Converts a vector from sample space into eigen space.
*
* @param sampleData Sample space data.
* @return Eigen space projection.
*/
public double[] sampleToEigenSpace( double[] sampleData ) {
if( sampleData.length != A.getNumCols() )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1,true,sampleData);
DenseMatrix64F r = new DenseMatrix64F(numComponents,1);
CommonOps.subtract(s, mean, s);
CommonOps.mult(V_t,s,r);
return r.data;
}
/**
* Converts a vector from eigen space into sample space.
*
* @param eigenData Eigen space data.
* @return Sample space projection.
*/
public double[] eigenToSampleSpace( double[] eigenData ) {
if( eigenData.length != numComponents )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1);
DenseMatrix64F r = DenseMatrix64F.wrap(numComponents,1,eigenData);
CommonOps.multTransA(V_t,r,s);
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
CommonOps.add(s,mean,s);
return s.data;
}
/**
* <p>
* The membership error for a sample. If the error is less than a threshold then
* it can be considered a member. The threshold's value depends on the data set.
* </p>
* <p>
* The error is computed by projecting the sample into eigenspace, projecting it
* back into sample space, and then taking the Euclidean norm of the difference.
* </p>
*
* @param sampleA The sample whose membership status is being considered.
* @return Its membership error.
*/
public double errorMembership( double[] sampleA ) {
double[] eig = sampleToEigenSpace(sampleA);
double[] reproj = eigenToSampleSpace(eig);
double total = 0;
for( int i = 0; i < reproj.length; i++ ) {
double d = sampleA[i] - reproj[i];
total += d*d;
}
return Math.sqrt(total);
}
/**
* Computes the dot product of each basis vector against the sample. Can be used as a measure
* for membership in the training sample set. High values correspond to a better fit.
*
* @param sample Sample of original data.
* @return Higher value indicates it is more likely to be a member of input dataset.
*/
public double response( double[] sample ) {
if( sample.length != A.numCols )
throw new IllegalArgumentException("Expected input vector to be in sample space");
DenseMatrix64F dots = new DenseMatrix64F(numComponents,1);
DenseMatrix64F s = DenseMatrix64F.wrap(A.numCols,1,sample);
CommonOps.mult(V_t,s,dots);
return NormOps.normF(dots);
}
}
</syntaxhighlight>
Example Polynomial Fitting
0
14
42
2015-03-22T05:34:19Z
Peter
1
Created page with "In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are; 1) how to create a linear using Li..."
wikitext
text/x-wiki
In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are: 1) how to create a linear solver using LinearSolverFactory, 2) how to use an adjustable linear solver, and 3) effective matrix reshaping. This is all done using the procedural interface.
First a best fit polynomial is fit to a set of data, then outliers are removed from the observation set and the coefficients recomputed. Outliers are removed efficiently using an adjustable solver that does not re-solve the whole system.
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/PolynomialFit.java PolynomialFit]
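For readers who want to see the underlying math, the sketch below fits polynomial coefficients in plain Java by building the Vandermonde matrix and solving the normal equations with naive Gaussian elimination. This is a hypothetical illustration only; the EJML example below uses LinearSolverFactory, which is the recommended and more numerically robust route.

```java
public class VandermondeSketch {
    // Fit polynomial coefficients (least significant first) to data by forming
    // the Vandermonde matrix A and solving the normal equations A^T A c = A^T y.
    static double[] fit(double[] x, double[] y, int degree) {
        int n = x.length, m = degree + 1;
        double[][] A = new double[n][m];
        for (int i = 0; i < n; i++) {
            double p = 1;
            for (int j = 0; j < m; j++) { A[i][j] = p; p *= x[i]; }
        }
        // form the augmented normal-equation system [A^T A | A^T y]
        double[][] M = new double[m][m + 1];
        for (int r = 0; r < m; r++) {
            for (int c = 0; c < m; c++)
                for (int i = 0; i < n; i++) M[r][c] += A[i][r] * A[i][c];
            for (int i = 0; i < n; i++) M[r][m] += A[i][r] * y[i];
        }
        // Gaussian elimination with partial pivoting
        for (int k = 0; k < m; k++) {
            int piv = k;
            for (int r = k + 1; r < m; r++)
                if (Math.abs(M[r][k]) > Math.abs(M[piv][k])) piv = r;
            double[] t = M[k]; M[k] = M[piv]; M[piv] = t;
            for (int r = k + 1; r < m; r++) {
                double f = M[r][k] / M[k][k];
                for (int c = k; c <= m; c++) M[r][c] -= f * M[k][c];
            }
        }
        // back substitution
        double[] coef = new double[m];
        for (int k = m - 1; k >= 0; k--) {
            double s = M[k][m];
            for (int c = k + 1; c < m; c++) s -= M[k][c] * coef[c];
            coef[k] = s / M[k][k];
        }
        return coef;
    }

    public static void main(String[] args) {
        // samples of y = 1 + 2x + 3x^2
        double[] x = {0, 1, 2, 3};
        double[] y = {1, 6, 17, 34};
        double[] c = fit(x, y, 2);
        System.out.printf("%.3f %.3f %.3f%n", c[0], c[1], c[2]);
    }
}
```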
= PolynomialFit Example Code =
<syntaxhighlight lang="java">
/**
* <p>
* This example demonstrates how a polynomial can be fit to a set of data. This is done by
* using a least squares solver that is adjustable. By using an adjustable solver elements
* can be inexpensively removed and the coefficients recomputed. This is much less expensive
* than resolving the whole system from scratch.
* </p>
* <p>
* The following is demonstrated:<br>
* <ol>
* <li>Creating a solver using LinearSolverFactory</li>
* <li>Using an adjustable solver</li>
* <li>reshaping</li>
* </ol>
* @author Peter Abeles
*/
public class PolynomialFit {
// Vandermonde matrix
DenseMatrix64F A;
// matrix containing computed polynomial coefficients
DenseMatrix64F coef;
// observation matrix
DenseMatrix64F y;
// solver used to compute
AdjustableLinearSolver solver;
/**
* Constructor.
*
* @param degree The polynomial's degree which is to be fit to the observations.
*/
public PolynomialFit( int degree ) {
coef = new DenseMatrix64F(degree+1,1);
A = new DenseMatrix64F(1,degree+1);
y = new DenseMatrix64F(1,1);
// create a solver that allows elements to be added or removed efficiently
solver = LinearSolverFactory.adjustable();
}
/**
* Returns the computed coefficients
*
* @return polynomial coefficients that best fit the data.
*/
public double[] getCoef() {
return coef.data;
}
/**
* Computes the best fit set of polynomial coefficients to the provided observations.
*
* @param samplePoints where the observations were sampled.
* @param observations A set of observations.
*/
public void fit( double samplePoints[] , double[] observations ) {
// Create a copy of the observations and put it into a matrix
y.reshape(observations.length,1,false);
System.arraycopy(observations,0, y.data,0,observations.length);
// reshape the matrix to avoid unnecessarily declaring new memory
// the save-values flag is set to false since the old values don't matter
A.reshape(y.numRows, coef.numRows,false);
// set up the A matrix
for( int i = 0; i < observations.length; i++ ) {
double obs = 1;
for( int j = 0; j < coef.numRows; j++ ) {
A.set(i,j,obs);
obs *= samplePoints[i];
}
}
// process the A matrix and see if it failed
if( !solver.setA(A) )
throw new RuntimeException("Solver failed");
// solve for the coefficients
solver.solve(y,coef);
}
/**
* Removes the observation that fits the model the worst and recomputes the coefficients.
* This is done efficiently by using an adjustable solver. Often times the elements with
* the largest errors are outliers and not part of the system being modeled. By removing them
* a more accurate set of coefficients can be computed.
*/
public void removeWorstFit() {
// find the observation with the most error
int worstIndex=-1;
double worstError = -1;
for( int i = 0; i < y.numRows; i++ ) {
double predictedObs = 0;
for( int j = 0; j < coef.numRows; j++ ) {
predictedObs += A.get(i,j)*coef.get(j,0);
}
double error = Math.abs(predictedObs- y.get(i,0));
if( error > worstError ) {
worstError = error;
worstIndex = i;
}
}
// nothing left to remove, so just return
if( worstIndex == -1 )
return;
// remove that observation
removeObservation(worstIndex);
// update A
solver.removeRowFromA(worstIndex);
// solve for the parameters again
solver.solve(y,coef);
}
/**
* Removes an element from the observation matrix.
*
* @param index which element is to be removed
*/
private void removeObservation( int index ) {
final int N = y.numRows-1;
final double d[] = y.data;
// shift
for( int i = index; i < N; i++ ) {
d[i] = d[i+1];
}
y.numRows--;
}
}
</syntaxhighlight>
Example Polynomial Roots
0
15
43
2015-03-22T05:36:18Z
Peter
1
Created page with "Eigenvalue decomposition can be used to find the roots in a polynomial by constructing the so called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While f..."
wikitext
text/x-wiki
Eigenvalue decomposition can be used to find the roots in a polynomial by constructing the so called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While faster techniques do exist for root finding, this is one of the most stable and probably the easiest to implement.
Because the companion matrix is not symmetric a generalized eigenvalue [MatrixDecomposition decomposition] is needed. The roots of the polynomial may also be [http://en.wikipedia.org/wiki/Complex_number complex]. Complex eigenvalues are the only instance in which EJML supports complex arithmetic. Depending on the application, one might need to check whether the eigenvalues are real or complex.
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/PolynomialRootFinder.java PolynomialRootFinder]
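The companion-matrix construction itself involves no decomposition and can be sketched in plain Java. The layout below matches the example code: the last column holds -coefficients[i]/coefficients[N] and ones sit on the subdiagonal. The class name is hypothetical.

```java
public class CompanionSketch {
    // Build the companion matrix of a polynomial whose coefficients are
    // ordered least to most significant, matching the example below.
    static double[][] companion(double... coefficients) {
        int N = coefficients.length - 1;
        double a = coefficients[N];  // leading coefficient
        double[][] c = new double[N][N];
        for (int i = 0; i < N; i++)
            c[i][N - 1] = -coefficients[i] / a;  // last column
        for (int i = 1; i < N; i++)
            c[i][i - 1] = 1;                     // subdiagonal of ones
        return c;
    }

    public static void main(String[] args) {
        // x^2 - 3x + 2 = (x-1)(x-2); the companion matrix's eigenvalues are 1 and 2
        double[][] c = companion(2, -3, 1);
        System.out.println(c[0][1] + " " + c[1][1] + " " + c[1][0]); // -2.0 3.0 1.0
    }
}
```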
= Example Code =
<syntaxhighlight lang="java">
public class PolynomialRootFinder {
/**
* <p>
* Given a set of polynomial coefficients, compute the roots of the polynomial. Depending on
the polynomial being considered the roots may contain complex numbers. When complex numbers are
* present they will come in pairs of complex conjugates.
* </p>
*
* <p>
* Coefficients are ordered from least to most significant, e.g: y = c[0] + x*c[1] + x*x*c[2].
* </p>
*
* @param coefficients Coefficients of the polynomial.
* @return The roots of the polynomial
*/
public static Complex64F[] findRoots(double... coefficients) {
int N = coefficients.length-1;
// Construct the companion matrix
DenseMatrix64F c = new DenseMatrix64F(N,N);
double a = coefficients[N];
for( int i = 0; i < N; i++ ) {
c.set(i,N-1,-coefficients[i]/a);
}
for( int i = 1; i < N; i++ ) {
c.set(i,i-1,1);
}
// use generalized eigenvalue decomposition to find the roots
EigenDecomposition<DenseMatrix64F> evd = DecompositionFactory.eig(N,false);
evd.decompose(c);
Complex64F[] roots = new Complex64F[N];
for( int i = 0; i < N; i++ ) {
roots[i] = evd.getEigenvalue(i);
}
return roots;
}
}
</syntaxhighlight>
Example Customizing SimpleMatrix
0
16
48
2015-03-22T05:54:09Z
Peter
1
Created page with " [[SimpleMatrix]] provides an easy to use object oriented way of doing linear algebra. There are many other problems which use matrices and could use SimpleMatrix's functiona..."
wikitext
text/x-wiki
[[SimpleMatrix]] provides an easy to use object oriented way of doing linear algebra. There are many other problems which use matrices and could use SimpleMatrix's functionality. In those situations it is desirable to simply extend SimpleMatrix and add additional functions as needed.
Naively extending SimpleMatrix is problematic because internally SimpleMatrix creates new matrices and its functions return objects of the wrong type. To get around these problems SimpleBase is extended instead and its abstract functions are implemented. SimpleBase provides all the core functionality of SimpleMatrix, with the exception of its static functions.
An example is provided below where a new class called StatisticsMatrix is created that adds statistical functions to SimpleMatrix. Usage examples are provided in its main() function.
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/StatisticsMatrix.java StatisticsMatrix]
<syntaxhighlight lang="java">
/**
* Example of how to extend "SimpleMatrix" and add your own functionality. In this case
* two basic statistic operations are added. Since SimpleBase is extended and StatisticsMatrix
* is specified as the generics type, all "SimpleMatrix" operations return a matrix of
* type StatisticsMatrix, ensuring strong typing.
*
* @author Peter Abeles
*/
public class StatisticsMatrix extends SimpleBase<StatisticsMatrix> {
public StatisticsMatrix( int numRows , int numCols ) {
super(numRows,numCols);
}
protected StatisticsMatrix(){}
/**
* Wraps a StatisticsMatrix around 'm'. Does NOT create a copy of 'm' but saves a reference
* to it.
*/
public static StatisticsMatrix wrap( DenseMatrix64F m ) {
StatisticsMatrix ret = new StatisticsMatrix();
ret.mat = m;
return ret;
}
/**
* Computes the mean or average of all the elements.
*
* @return mean
*/
public double mean() {
double total = 0;
final int N = getNumElements();
for( int i = 0; i < N; i++ ) {
total += get(i);
}
return total/N;
}
/**
* Computes the unbiased standard deviation of all the elements.
*
* @return standard deviation
*/
public double stdev() {
double m = mean();
double total = 0;
final int N = getNumElements();
if( N <= 1 )
throw new IllegalArgumentException("There must be more than one element to compute stdev");
for( int i = 0; i < N; i++ ) {
double x = get(i);
total += (x - m)*(x - m);
}
total /= (N-1);
return Math.sqrt(total);
}
/**
* Returns a matrix of StatisticsMatrix type so that SimpleMatrix functions create matrices
* of the correct type.
*/
@Override
protected StatisticsMatrix createMatrix(int numRows, int numCols) {
return new StatisticsMatrix(numRows,numCols);
}
public static void main( String args[] ) {
Random rand = new Random(24234);
int N = 500;
// create two vectors whose elements are drawn from uniform distributions
StatisticsMatrix A = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,0,1,rand));
StatisticsMatrix B = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,1,2,rand));
// the mean should be about 0.5
System.out.println("Mean of A is "+A.mean());
// the mean should be about 1.5
System.out.println("Mean of B is "+B.mean());
StatisticsMatrix C = A.plus(B);
// the mean should be about 2.0
System.out.println("Mean of C = A + B is "+C.mean());
System.out.println("Standard deviation of A is "+A.stdev());
System.out.println("Standard deviation of B is "+B.stdev());
System.out.println("Standard deviation of C is "+C.stdev());
}
}
</syntaxhighlight>
Example Fixed Sized Matrices
0
17
49
2015-03-22T05:56:13Z
Peter
1
Created page with "Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by having each element in the matrix be a variable in the..."
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by having each element in the matrix be a variable in the class. EJML provides support for fixed sized matrices and vectors up to 6x6, at which point it loses its advantage. The example below demonstrates how to use a fixed sized matrix and convert to other matrix types in EJML.
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
== Fixed Matrix Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
* example is to show some examples of how you can use a fixed sized matrix.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String args[] ) {
// declare the matrix
FixedMatrix3x3_64F a = new FixedMatrix3x3_64F();
FixedMatrix3x3_64F b = new FixedMatrix3x3_64F();
// Can assign values the usual way
for( int i = 0; i < 3; i++ ) {
for( int j = 0; j < 3; j++ ) {
a.set(i,j,i+j+1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
FixedOps3.transpose(a,b);
b.print();
System.out.println("Determinant = "+FixedOps3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize its value
FixedMatrix3_64F v = new FixedMatrix3_64F(1,2,3);
FixedMatrix3_64F result = new FixedMatrix3_64F();
FixedOps3.mult(a,v,result);
// Conversion into DenseMatrix64F can also be done
DenseMatrix64F dm = ConvertMatrixType.convert(a,null);
dm.print();
// This can be useful if you need to do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
FixedMatrix3x3_64F fv = ConvertMatrixType.convert(sv.getMatrix(),(FixedMatrix3x3_64F)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
Equations
0
18
51
2015-03-22T06:08:53Z
Peter
1
Created page with "Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML just offered two API's for performing linear algebra. A procedu..."
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provides complete control over memory and which algorithms are used, but it is verbose and has a sharper learning curve. Alternatively you could use the object oriented API (SimpleMatrix), but then you lose control over memory and it has a limited set of operators. Neither of these APIs produces code which is all that similar to how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators, allowing for more natural code; see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions; see the usage example below:
<pre>
eq.process("K = P*H'*inv( H*P*H' + R )");
</pre>
It is easy to see that the Equation code is compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API. There are situations where those other interfaces are easier to use, and most programs will need to use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those classes.<br>
'''Line 04:''' Process() is called and passed in a text string with an equation in it. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' Process() is called again and P is set to the result of adding S and Q together. Because P is aliased to the input matrix P that matrix is changed.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" from a multivariate normal distribution defined by matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<pre>
double p = eq.lookupDouble("p");
</pre>
For a matrix you could use eq.lookupMatrix() and eq.lookupInteger() for integers. If you don't know the variable's type then eq.lookupVariable() is what you need.
It is also possible to define a matrix inline:
{{{
eq.process("P = [10 0 0;0 10 0;0 0 10]");
}}}
Will assign P to a 3x3 matrix with 10's all along it's diagonal. Other matrices can also be included inside:
{{{
eq.process("P = [A ; B]");
}}}
will concatenate A and B horizontally.
Submatrices are also supported for assignment and reference.
{{{
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
}}}
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs very literal translations of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C, save the result, then scale B by 2.5, save the result, multiply the results together, save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling the text string contains requires a bit of overhead but once compiled it can be run with very fast. When dealing with larger matrices the overhead involved is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
{{{
eq.process("K = P*H'*inv( H*P*H' + R )");
}}}
Precompiled:
{{{
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the results with out needing to recompile
s.perform();
}}}
Both are equivalent, but if an equation is invoked multiple times the precompiled version can have a noticable improvement in performance. Using precompiled sequences also means that means that internal arrays are only declared once and allows the user to control when memory is created/destroyed.
To make it clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
{{{
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
}}}
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws some weird exception or just doesn't do what you expected. To see the tokens and sequence of operations set the second parameter in compile() or peform() to true.
For example:
{{{
eq.process("y = z - H*x",true);
}}}
When application is run it will print out
{{{
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
}}}
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. A copy of scalar numbers can also be aliased. When a variable is aliased a reference to the data is saved and a name associated with it.
{{{
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
}}}
Multiple variables can be aliased at the same time too
{{{
eq.alias(x,"x",P,"P",h,"Happy");
}}}
As is shown above the string name for a variable does not have to be the same as Java name of the variable. Here is an example where an integer and double is aliased.
{{{
int a = 6;
eq.alias(2.3,"distance",a,"a");
}}}
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
{{{
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
}}}
If after benchmarking your code and discover that the alias operation is slowing it down (a hashmap lookup is done internally) then you should consider the following faster, but uglier, alternative.
{{{
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
}}}
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
{{{
A(1:4,0:5)
}}}
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer set from 'a' to 'b', where 'a' and 'b' must be integers themselves. To specify every row or column do ":" or all rows and columns past a certain 'a' can be referenced with "a:". Finally, you can reference just a number by typeing it, e.g. "a".
{{{
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
}}}
The last example is a special case in that A(1,2) will return a double and not 1x1 matrix. Consider the following:
{{{
A(0:2,0:2) = C/B(1,2)
}}}
The results of dividing the elements of matrix C by the value of B(1,2) is assigned to the submatrix in A.
A named variable can also be used to reference elements as long as it's an integer.
{{{
a = A(i,j)
}}}
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
{{{
[5 0 0;0 4.0 0.0 ; 0 0 1]
}}}
Defines a 3x3 matrix with 5,4,1 for it's diagonal elements. Visually this looks like:
{{{
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
}}}
An inline matrix can be used to concatenate other matrices together.
{{{
[ A ; B ; C ]
}}}
Will concatenate matrices A, B, and C along their rows. They must have the same number of columns. As you might guess, to concatenate along columns you would
{{{
[ A B C ]
}}}
and each matrix must have the same number of rows. Inner matrices are also allowed
{{{
[ [1 2;2 3] [4;5] ; A ]
}}}
which will result in
{{{
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
}}}
= Built in Functions and Variables =
*Constants*
<pre>
pi = Math.PI
e = Math.E
</pre>
*Functions*
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A) Frobenius normal of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
}}}
*Symbols*
{{{
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
'^' Scalar power. a^b is a to the power of b.
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by call add(). The output matrix should also be resized.
[https://github.com/lessthanoptimal/ejml/blob/equation/examples/src/org/ejml/example/EquationCustomFunction.java Custom Function Example]
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provides complete control over memory and which algorithms are used, but it is verbose and has a steeper learning curve. Alternatively, the object oriented API (SimpleMatrix) is easier to use, but you lose control over memory and it has a limited set of operators. Neither API produces code that closely resembles how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers operator overloading, allowing for more natural code; see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing a text string containing an equation and converting it into a set of executable instructions; see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equation code is compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality, and it is not a replacement for SimpleMatrix or the procedural API. There are situations where those other interfaces are easier to use, and most programs will need to use a mix.
Equation is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with such systems and can quickly pick up the syntax through these examples.
Let's start with a complete but simple example, then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Instantiate the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those matrices.<br>
'''Line 04:''' process() is called with a text string containing an equation. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' process() is called again and P is set to the result of adding S and Q. Because P is aliased to the input matrix P, that matrix is modified.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" from a multivariate normal distribution defined by the matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix you would use eq.lookupMatrix(), and eq.lookupInteger() for integers. If you don't know the variable's type then eq.lookupVariable() is what you need.
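To make the density formula above concrete, here is a plain-Java sketch (independent of the Equation API) that evaluates the same expression for the special case of a diagonal covariance P, where det(P) and inv(P) are trivial. The class and method names are hypothetical, chosen for illustration only.

```java
// Hypothetical helper: evaluates
//   p = (2*pi)^(-k/2) / sqrt(det(P)) * exp(-0.5*(x-mu)'*inv(P)*(x-mu))
// for the special case where P is diagonal, so det(P) is the product of the
// diagonal and inv(P) simply divides each squared residual by its variance.
public class GaussianDemo {
    public static double likelihood(double[] x, double[] mu, double[] diagP) {
        int k = x.length;
        double det = 1.0;
        double mahalanobis = 0.0;
        for (int i = 0; i < k; i++) {
            det *= diagP[i];
            double r = x[i] - mu[i];
            mahalanobis += r * r / diagP[i];
        }
        return Math.pow(2 * Math.PI, -k / 2.0) / Math.sqrt(det) * Math.exp(-0.5 * mahalanobis);
    }
}
```

At the mean with an identity covariance and k=2 this reduces to 1/(2*pi), which is a handy sanity check.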
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
This will assign P a 3x3 matrix with 10's along its diagonal. Other matrices can also be included inside:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will concatenate A and B vertically, since the semi-colon ends a row.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
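The semantics of that statement can be mimicked with plain 2D arrays. The helper below is a hypothetical illustration of the inclusive-range copy, not EJML code.

```java
// Hypothetical illustration of P(2:5,0:3) = 10*A(1:4,10:13):
// copies an inclusive row/column range of src, scaled, into dst.
public class SubmatrixDemo {
    public static void assign(double[][] dst, int r0, int r1, int c0, int c1,
                              double scale, double[][] src, int sr0, int sc0) {
        for (int r = r0; r <= r1; r++)         // inclusive row range
            for (int c = c0; c <= c1; c++)     // inclusive column range
                dst[r][c] = scale * src[sr0 + (r - r0)][sc0 + (c - c0)];
    }
}
```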
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs a very literal translation of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C and save the result, scale B by 2.5 and save the result, multiply those results together and save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling the text string requires a bit of overhead, but once compiled the equation can be run very quickly. When dealing with larger matrices the overhead is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the sequence without needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can show a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are only declared once, which allows the user to control when memory is created and destroyed.
To be clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws a strange exception or just doesn't do what you expected. To see the parsed tokens and the sequence of operations, set the second parameter in compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print:
<pre>
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</pre>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. Scalar numbers can also be aliased, in which case a copy of the value is stored. When a variable is aliased, a reference to the data is saved and a name is associated with it.
<syntaxhighlight lang="java">
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As shown above, the string name for a variable does not have to be the same as the Java name of the variable. Here is an example where an integer and a double are aliased.
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
</syntaxhighlight>
If after benchmarking your code you discover that the alias operation is slowing it down (a hash map lookup is done internally), then you should consider the following faster, but uglier, alternative.
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
</syntaxhighlight>
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer range from 'a' to 'b', where 'a' and 'b' must themselves be integers. To specify every row or column use ":", and all rows or columns from 'a' onward can be referenced with "a:". Finally, you can reference a single row or column by typing just a number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the sub-matrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
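The index notation described above can be summarized by a small parser. The class below is a hypothetical sketch, not part of EJML; `length` stands in for the matrix dimension needed to resolve open-ended ranges like "3:" and ":".

```java
// Hypothetical parser for the "a:b", "a:", ":" and "a" index notation.
// Ranges are inclusive; an omitted bound defaults to 0 or length-1.
public class RangeSpec {
    public static int[] expand(String spec, int length) {
        spec = spec.trim();
        int lo, hi;
        int colon = spec.indexOf(':');
        if (colon < 0) {                  // "a" -> a single index
            lo = hi = Integer.parseInt(spec);
        } else {
            String a = spec.substring(0, colon).trim();
            String b = spec.substring(colon + 1).trim();
            lo = a.isEmpty() ? 0 : Integer.parseInt(a);           // ":b" starts at 0
            hi = b.isEmpty() ? length - 1 : Integer.parseInt(b);  // "a:" runs to the end
        }
        int[] out = new int[hi - lo + 1];
        for (int i = 0; i < out.length; i++) out[i] = lo + i;
        return out;
    }
}
```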
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in row-major format: a space separates elements within a row and a semi-colon marks the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
defines a 3x3 matrix with 5, 4, 1 for its diagonal elements. Visually this looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
will concatenate matrices A, B, and C along their rows, i.e. stack them vertically. They must have the same number of columns. As you might guess, to concatenate along columns you would write
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
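The shape rules above can be sketched with plain 2D arrays. These are hypothetical helpers for illustration, not EJML code: stacking along rows requires equal column counts, and side-by-side placement requires equal row counts.

```java
public class ConcatDemo {
    // [A ; B] -- stack vertically; both inputs must have the same column count
    public static double[][] vertical(double[][] a, double[][] b) {
        if (a[0].length != b[0].length) throw new IllegalArgumentException("column counts differ");
        double[][] out = new double[a.length + b.length][];
        for (int i = 0; i < a.length; i++) out[i] = a[i].clone();
        for (int i = 0; i < b.length; i++) out[a.length + i] = b[i].clone();
        return out;
    }
    // [A B] -- place side by side; both inputs must have the same row count
    public static double[][] horizontal(double[][] a, double[][] b) {
        if (a.length != b.length) throw new IllegalArgumentException("row counts differ");
        double[][] out = new double[a.length][a[0].length + b[0].length];
        for (int i = 0; i < a.length; i++) {
            System.arraycopy(a[i], 0, out[i], 0, a[i].length);
            System.arraycopy(b[i], 0, out[i], a[i].length, b[i].length);
        }
        return out;
    }
}
```

For example, horizontal([1 2;2 3], [4;5]) reproduces the top two rows of the inner-matrix example above.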
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A)    Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A)     If A is a vector, returns a square matrix whose diagonal is filled with the vector's elements
diag(A)     If A is a matrix, returns its diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a)      Math.exp(a) for scalars; applied element-wise to matrices
log(a)      Math.log(a) for scalars; applied element-wise to matrices
</pre>
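As an illustration of what solve(A,B) computes, here is a plain-Java 2x2 solve using Cramer's rule. This is a toy sketch only; it is not how EJML solves systems (EJML selects a decomposition at runtime):

```java
// Toy 2x2 linear solve via Cramer's rule; illustrates solve(A,b) semantics only.
public class SolveDemo {
    // Return x such that A*x = b, for a 2x2 A.
    public static double[] solve2x2(double[][] A, double[] b) {
        double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
        if (det == 0) throw new ArithmeticException("Singular matrix");
        return new double[] {
            (b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det
        };
    }

    public static void main(String[] args) {
        double[][] A = {{2, 0}, {0, 4}};
        double[] b = {6, 8};
        double[] x = solve2x2(A, b);  // x = solve(A,b)
        System.out.println(x[0] + " " + x[1]);  // 3.0 2.0
    }
}
```

In practice prefer solve(A,B) over inv(A)*B: solving directly is both faster and numerically more stable than forming an explicit inverse.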
'''Symbols'''
<pre>
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
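The element-wise operators pair up matching elements of two same-sized matrices. A plain-Java sketch of '.*' (a hypothetical helper for illustration, not EJML code):

```java
// Hypothetical sketch of the element-wise '.*' operator; not EJML code.
public class ElementOpsDemo {
    // Multiply matching elements of two same-sized matrices.
    public static double[][] elementMult(double[][] A, double[][] B) {
        if (A.length != B.length || A[0].length != B[0].length)
            throw new IllegalArgumentException("Shapes must match");
        double[][] out = new double[A.length][A[0].length];
        for (int r = 0; r < A.length; r++)
            for (int c = 0; c < A[0].length; c++)
                out[r][c] = A[r][c] * B[r][c];  // element-by-element product
        return out;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{10, 10}, {10, 10}};
        double[][] C = elementMult(A, B);  // A .* B
        System.out.println(C[1][1]);       // 40.0
    }
}
```

Contrast this with '*', which performs true matrix multiplication with the usual row-by-column dot products.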
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The function should also resize its output matrix.
[[Example Customizing Equations]]
ac7bc2ce70e2059d91552c63378e3e4ae86b120b
Manual
0
8
53
47
2015-03-22T06:11:30Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided into basics (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, like how to build it or include it in your project, are answered in the list below. If you have a question which isn't answered, or something is confusing, feel free to post it on the message board! This manual teaches EJML primarily through examples, listed below. The examples are drawn from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided by EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a fluent style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. It can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box, and its compiler isn't smart enough to pick the most efficient functions.
An example of computing the Kalman gain "K" with each interface:
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code, then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, then [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is actually a bottleneck. It's much easier to debug that way.
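For intuition about what the three snippets above compute, the 1x1 (scalar) case of K = P*H'*inv(H*P*H' + R) collapses to simple arithmetic. A tiny plain-Java check (illustration only, not EJML code):

```java
// Scalar Kalman gain: the 1x1 case of K = P*H'*inv(H*P*H' + R); not EJML code.
public class ScalarKalmanGain {
    public static double gain(double P, double H, double R) {
        double S = H * P * H + R;  // innovation covariance S = H*P*H' + R
        return P * H / S;          // K = P*H'/S
    }

    public static void main(String[] args) {
        // With P=4, H=1, R=1 the gain is 4/5.
        System.out.println(gain(4, 1, 1));
    }
}
```

The matrix versions do exactly this, except that the division by S becomes a matrix inverse (or, better, a linear solve).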
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Complex Math|Complex Math]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
135cb3480274ac2d18439cc7b36ae96c69d83c8b
54
53
2015-03-22T06:11:50Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided up unto basic (addition, subtraction, multiplication, ...etc), decompositions (LU, QR, SVD, ... etc), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build or include it in your project, is provided in the list below. If you have a question which isn't answered or is confusion feel free to post a question on the message board! Instructions on how to use EJML is primarily done in this manual through example, see below. The examples are selected from common real-world problems, such as Kalman filters. Some times the same example is provided in three different formats using one of the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API BoofCV provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to connect multiple operations together using a flow strategy, which is much easier to read and write. Limited subset of operations are supported and memory is constantly created and destroyed.
* [[Equations]]: Is a symbolic interface that allows you to manipulate matrices in a similar manor to Matlab/Octave. Can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of compute the Kalman gain "K"
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Complex Math|Complex Math]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The follow are code examples of common linear algebra problems intended to demonstrate different parts of EJML. In the table below it indicates which interface or interfaces the example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
a1755db3ad8fc8f683088e227714e01e51315693
62
54
2015-03-22T16:31:31Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided up unto basic (addition, subtraction, multiplication, ...etc), decompositions (LU, QR, SVD, ... etc), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build or include it in your project, is provided in the list below. If you have a question which isn't answered or is confusion feel free to post a question on the message board! Instructions on how to use EJML is primarily done in this manual through example, see below. The examples are selected from common real-world problems, such as Kalman filters. Some times the same example is provided in three different formats using one of the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API BoofCV provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to connect multiple operations together using a flow strategy, which is much easier to read and write. Limited subset of operations are supported and memory is constantly created and destroyed.
* [[Equations]]: Is a symbolic interface that allows you to manipulate matrices in a similar manor to Matlab/Octave. Can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of compute the Kalman gain "K"
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Complex Math|Complex Math]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The follow are code examples of common linear algebra problems intended to demonstrate different parts of EJML. In the table below it indicates which interface or interfaces the example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
bbf2c3202671339f46494ff145f4d5679f097e96
65
62
2015-03-22T16:38:35Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided up unto basic (addition, subtraction, multiplication, ...etc), decompositions (LU, QR, SVD, ... etc), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build or include it in your project, is provided in the list below. If you have a question which isn't answered or is confusion feel free to post a question on the message board! Instructions on how to use EJML is primarily done in this manual through example, see below. The examples are selected from common real-world problems, such as Kalman filters. Some times the same example is provided in three different formats using one of the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API BoofCV provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to connect multiple operations together using a flow strategy, which is much easier to read and write. Limited subset of operations are supported and memory is constantly created and destroyed.
* [[Equations]]: Is a symbolic interface that allows you to manipulate matrices in a similar manor to Matlab/Octave. Can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of compute the Kalman gain "K"
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Fixed Sized Matrices]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The follow are code examples of common linear algebra problems intended to demonstrate different parts of EJML. In the table below it indicates which interface or interfaces the example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
748fd20da4f786c4cd1ff3e719c0669e4f7d9821
71
65
2015-03-23T14:42:48Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided up unto basic (addition, subtraction, multiplication, ...etc), decompositions (LU, QR, SVD, ... etc), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build or include it in your project, is provided in the list below. If you have a question which isn't answered or is confusion feel free to post a question on the message board! Instructions on how to use EJML is primarily done in this manual through example, see below. The examples are selected from common real-world problems, such as Kalman filters. Some times the same example is provided in three different formats using one of the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API BoofCV provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to connect multiple operations together using a flow strategy, which is much easier to read and write. Limited subset of operations are supported and memory is constantly created and destroyed.
* [[Equations]]: Is a symbolic interface that allows you to manipulate matrices in a similar manor to Matlab/Octave. Can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of compute the Kalman gain "K"
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* Dense Real
* Dense Complex
* Fixed Sized Real
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The follow are code examples of common linear algebra problems intended to demonstrate different parts of EJML. In the table below it indicates which interface or interfaces the example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and use more advanced techniques? Curious where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links helps EJML's developer buy high-end ramen noodles.
7051f1ebbcfbbfdd2a1bad16b2d41404653ede2e
73
71
2015-03-23T14:59:21Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is the Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically these operations are divided into basics (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the list below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces in EJML, to help you understand the differences between them.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and having that much control over memory is tedious.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a flow style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input size doesn't change. It is a bit of a black box, and its compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code, ''Procedural'' is the way to go. For large matrices it matters little which one you use, since the interface overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see whether that code is actually a bottleneck; it's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
== Matrix Types ==
EJML provides support for the following matrix types:
* [http://ejml.org/javadoc/org/ejml/data/DenseMatrix64F.html Dense Real Float 64]
* [http://ejml.org/javadoc/org/ejml/data/CDenseMatrix64F.html Dense Complex Float 64]
* [[Example Fixed Sized Matrices|Fixed Sized Real Float 64]]
Float 64 refers to the use of 64-bit floating point numbers, otherwise known as a double in Java.
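To make "Float 64" concrete, the following plain-Java snippet (no EJML required; the class name is made up for illustration) confirms that a Java double is a 64-bit floating point value:

```java
public class Float64Demo {
    public static void main(String[] args) {
        // "Float 64" means Java's primitive double: a 64-bit IEEE 754 value.
        System.out.println(Double.SIZE); // 64 bits per element
        System.out.println(Float.SIZE);  // 32 bits, for comparison
    }
}
```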
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and use more advanced techniques? Curious where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links helps EJML's developer buy high-end ramen noodles.
7ecdc20537d5ff8b3e54ea50815043e60d5730d9
81
73
2015-03-25T15:07:14Z
Peter
1
wikitext
text/x-wiki
= The Basics =
What exactly is the Efficient Java Matrix Library (EJML)? EJML is a Java library for performing standard linear algebra operations on dense matrices. Typically these operations are divided into basics (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the list below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces in EJML, to help you understand the differences between them.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and having that much control over memory is tedious.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a flow style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input size doesn't change. It is a bit of a black box, and its compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code, ''Procedural'' is the way to go. For large matrices it matters little which one you use, since the interface overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see whether that code is actually a bottleneck; it's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and use more advanced techniques? Curious where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links helps EJML's developer buy high-end ramen noodles.
ff6c995bf2c7961c695b3098b6e221b25f3dbde9
Example Customizing Equations
0
19
55
2015-03-22T06:16:12Z
Peter
1
Created page with "While Equations provides many of the most common functions used in Linear Algebra, there are many it does not provide. The following example demonstrates how to add your own..."
wikitext
text/x-wiki
While Equations provides many of the most common functions used in Linear Algebra, there are many it does not provide. The following example demonstrates how to add your own functions to Equations allowing you to extend its capabilities.
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/EquationCustomFunction.java EquationCustomFunction]
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration on how to create and use a custom function in Equation. A custom function must implement
* ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes.
*
* @author Peter Abeles
*/
public class EquationCustomFunction {

    public static void main(String[] args) {
        Random rand = new Random(234);

        Equation eq = new Equation();
        eq.getFunctions().add("multTransA",createMultTransA());

        SimpleMatrix A = new SimpleMatrix(1,1); // will be resized
        SimpleMatrix B = SimpleMatrix.random(3,4,-1,1,rand);
        SimpleMatrix C = SimpleMatrix.random(3,4,-1,1,rand);

        eq.alias(A,"A",B,"B",C,"C");
        eq.process("A=multTransA(B,C)");

        System.out.println("Found");
        System.out.println(A);
        System.out.println("Expected");
        B.transpose().mult(C).print();
    }

    /**
     * Create the function. Be sure to handle all possible input types and combinations correctly and provide
     * meaningful error messages. The output matrix should be resized to fit the inputs.
     */
    public static ManagerFunctions.InputN createMultTransA() {
        return new ManagerFunctions.InputN() {
            @Override
            public Operation.Info create(List<Variable> inputs, ManagerTempVariables manager ) {
                if( inputs.size() != 2 )
                    throw new RuntimeException("Two inputs required");

                final Variable varA = inputs.get(0);
                final Variable varB = inputs.get(1);

                Operation.Info ret = new Operation.Info();

                if( varA instanceof VariableMatrix && varB instanceof VariableMatrix ) {
                    // The output matrix or scalar variable must be created with the provided manager
                    final VariableMatrix output = manager.createMatrix();
                    ret.output = output;
                    ret.op = new Operation("multTransA-mm") {
                        @Override
                        public void process() {
                            DenseMatrix64F mA = ((VariableMatrix)varA).matrix;
                            DenseMatrix64F mB = ((VariableMatrix)varB).matrix;

                            output.matrix.reshape(mA.numCols,mB.numCols);
                            CommonOps.multTransA(mA,mB,output.matrix);
                        }
                    };
                } else {
                    throw new IllegalArgumentException("Expected both inputs to be a matrix");
                }

                return ret;
            }
        };
    }
}
</syntaxhighlight>
96b4565e17c86b8b99c5296609ec8d8e5a9694f4
Example Levenberg-Marquardt
0
12
56
50
2015-03-22T06:17:45Z
Peter
1
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provided complete control over memory and over which algorithms were used, but was verbose and had a sharper learning curve. Alternatively, you could use an object oriented API (SimpleMatrix), but you lose control over memory and it has a limited set of operators. Neither of these APIs produces code that looks much like how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators, allowing for more natural code; see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions; see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equations code is the most compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API; there are situations where those interfaces are easier to use, and most programs will use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those classes.<br>
'''Line 04:''' process() is called and passed a text string containing an equation. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' process() is called again and P is set to the result of adding S and Q. Because P is aliased to the input matrix P, that matrix is modified.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" under a multivariate normal distribution defined by the matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix you would use eq.lookupMatrix(), and eq.lookupInteger() for integers. If you don't know the variable's type, then eq.lookupVariable() is what you need.
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
This will assign P a 3x3 matrix with 10s along its diagonal. Other matrices can also be included inside:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will concatenate A and B vertically, since the semi-colon separates rows.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
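Note that, unlike Java's usual half-open ranges, "a:b" includes both endpoints. The following stand-alone sketch (plain Java arrays, not EJML's API; names are illustrative) shows the indexing that a submatrix reference like P(2:5,0:3) implies:

```java
public class InclusiveRangeDemo {
    public static void main(String[] args) {
        double[][] P = new double[6][6];
        P[2][0] = 7.0;
        P[5][3] = 9.0;

        // P(2:5,0:3) selects rows 2..5 and columns 0..3, both ends inclusive,
        // so the loop bounds use <= rather than Java's usual half-open <.
        int r0 = 2, r1 = 5, c0 = 0, c1 = 3;
        double[][] sub = new double[r1 - r0 + 1][c1 - c0 + 1];
        for (int r = r0; r <= r1; r++)
            for (int c = c0; c <= c1; c++)
                sub[r - r0][c - c0] = P[r][c];

        System.out.println(sub.length + "x" + sub[0].length); // 4x4
        System.out.println(sub[0][0]); // 7.0  (was P[2][0])
        System.out.println(sub[3][3]); // 9.0  (was P[5][3])
    }
}
```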
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs very literal translations of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C, save the result, then scale B by 2.5, save the result, multiply the results together, save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling the text string requires a bit of overhead, but once compiled the equation runs very fast. When dealing with larger matrices the overhead is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the sequence without needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can show a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are declared only once, allowing the user to control when memory is created/destroyed.
To be clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
    eq.alias(i,"i");
    s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws some weird exception or just doesn't do what you expected. To see the tokens and sequence of operations, set the second parameter of compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print out:
<syntaxhighlight lang="java">
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</syntaxhighlight>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. Scalar numbers can also be aliased, as copies. When a variable is aliased, a reference to its data is saved and a name is associated with it.
<syntaxhighlight lang="java">
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As shown above, the string name for a variable does not have to match the Java name of the variable. Here is an example where an integer and a double are aliased.
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
    eq.alias(i,"i");
    // do stuff with i
}
</syntaxhighlight>
If, after benchmarking your code, you discover that the alias operation is slowing it down (a hashmap lookup is done internally), then consider the following faster, but uglier, alternative.
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
    // do stuff with i
}
</syntaxhighlight>
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer range from 'a' to 'b', where 'a' and 'b' must themselves be integers. To specify every row or column use ":", and all rows or columns from 'a' onward can be referenced with "a:". Finally, you can reference a single row or column by typing just the number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the submatrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
This defines a 3x3 matrix with 5, 4, 1 as its diagonal elements. Visually this looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
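For intuition about how such a matrix is stored, here is a plain-Java sketch (illustrative only, not EJML code) of the row-major layout that dense matrix libraries typically use, where element (row,col) lives at index row*numCols + col:

```java
public class RowMajorDemo {
    public static void main(String[] args) {
        // The 3x3 matrix [5 0 0;0 4 0;0 0 1] laid out row by row in one array,
        // which is how a row-major dense matrix stores its elements.
        double[] data = {5, 0, 0,  0, 4, 0,  0, 0, 1};
        int numCols = 3;

        // Element (row,col) lives at index row*numCols + col.
        System.out.println(data[0 * numCols + 0]); // 5.0
        System.out.println(data[1 * numCols + 1]); // 4.0
        System.out.println(data[2 * numCols + 2]); // 1.0
    }
}
```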
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
This will concatenate matrices A, B, and C along their rows; they must have the same number of columns. As you might guess, to concatenate along columns you would write:
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<syntaxhighlight lang="java">
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A) Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
</syntaxhighlight>
'''Symbols'''
<syntaxhighlight lang="java">
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</syntaxhighlight>
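The distinction between '*' and '.*' is worth spelling out. This stand-alone sketch (plain Java with an illustrative class name, not EJML's API) computes both products for two 2x2 matrices:

```java
public class MultVsElementwise {
    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{5, 6}, {7, 8}};

        // '*' : true matrix multiplication, C[i][j] = sum over k of A[i][k]*B[k][j]
        double[][] mul = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    mul[i][j] += A[i][k] * B[k][j];

        // '.*' : element-wise multiplication, C[i][j] = A[i][j]*B[i][j]
        double[][] elem = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                elem[i][j] = A[i][j] * B[i][j];

        System.out.println(mul[0][0] + " " + mul[0][1]);   // 19.0 22.0
        System.out.println(elem[0][0] + " " + elem[0][1]); // 5.0 12.0
    }
}
```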
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The output matrix should also be resized.
[[Example Customizing Equations]]
c0f1df37d5159bc859a5da42bf61e78b57ce8929
57
56
2015-03-22T06:18:31Z
Peter
1
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provided complete control over memory and over which algorithms were used, but was verbose and had a sharper learning curve. Alternatively, you could use an object oriented API (SimpleMatrix), but you lose control over memory and it has a limited set of operators. Neither of these APIs produces code that looks much like how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators, allowing for more natural code; see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions; see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equations code is the most compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API; there are situations where those interfaces are easier to use, and most programs will use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare an instance of the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those matrices.<br>
'''Line 04:''' Process() is called and passed in a text string with an equation in it. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' Process() is called again and P is set to the result of adding S and Q together. Because 'P' is aliased to the input matrix P, that matrix is modified.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" from a multivariate normal distribution defined by matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix you would use eq.lookupMatrix(), and for an integer eq.lookupInteger(). If you don't know the variable's type then eq.lookupVariable() is what you need.
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
This will assign P a 3x3 matrix with 10's along its diagonal. Other matrices can also be included inside:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will concatenate A and B vertically, i.e. along their rows.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
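For readers new to the inclusive-range convention, the extraction above can be sketched in plain Java on a row-major array. This is a hypothetical standalone helper for illustration only, not EJML code:

```java
// Sketch of inclusive-range submatrix extraction on a row-major array.
// Hypothetical helper for illustration; EJML's Equation handles this internally.
public class SubmatrixDemo {
    /** Copies rows r0..r1 and columns c0..c1 (both inclusive) of a row-major matrix. */
    public static double[] extract(double[] src, int srcCols,
                                   int r0, int r1, int c0, int c1) {
        int rows = r1 - r0 + 1, cols = c1 - c0 + 1;
        double[] dst = new double[rows * cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                dst[r * cols + c] = src[(r0 + r) * srcCols + (c0 + c)];
        return dst;
    }

    public static void main(String[] args) {
        // 3x3 matrix holding 0..8; extract rows 1..2, cols 0..1
        double[] m = {0, 1, 2,  3, 4, 5,  6, 7, 8};
        double[] s = extract(m, 3, 1, 2, 0, 1);
        System.out.println(java.util.Arrays.toString(s)); // [3.0, 4.0, 6.0, 7.0]
    }
}
```

Note that an inclusive range of r0..r1 contains r1 - r0 + 1 rows, one more than the half-open convention used by many Java APIs.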
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs a very literal translation of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C and save the result, scale B by 2.5 and save the result, multiply those results together and save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling a text string requires a bit of overhead, but once compiled the equation can be run very quickly. When dealing with larger matrices the overhead is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the results with out needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can have a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are only declared once, and it allows the user to control when memory is created/destroyed.
To be clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws some weird exception or just doesn't do what you expected. To see the tokens and sequence of operations, set the second parameter in compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print out:
<syntaxhighlight lang="java">
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</syntaxhighlight>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. A copy of a scalar number can also be aliased. When a variable is aliased, a reference to the data is saved and a name is associated with it.
<syntaxhighlight lang="java">
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As shown above, the string name for a variable does not have to be the same as the Java name of the variable. Here is an example where an integer and a double are aliased:
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
</syntaxhighlight>
If, after benchmarking your code, you discover that the alias operation is slowing it down (a hashmap lookup is done internally), then you should consider the following faster, but uglier, alternative:
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
</syntaxhighlight>
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer range from 'a' to 'b', where 'a' and 'b' must themselves be integers. To specify every row or column use ":", and all rows or columns from 'a' onward can be referenced with "a:". Finally, you can reference a single row or column by typing just a number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the submatrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
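The "a:b", "a:", and ":" notations described above can be expanded into explicit index lists. Below is a hypothetical sketch of that expansion in plain Java; it is for illustration only and is not Equation's actual parser:

```java
// Expands range notation: "a:b" -> a..b inclusive, "a:" -> a..max,
// ":" -> 0..max, and a bare number -> just that index.
// Hypothetical illustration of the semantics; not Equation's parser.
public class RangeDemo {
    public static int[] expand(String range, int max) {
        int colon = range.indexOf(':');
        if (colon < 0) return new int[]{ Integer.parseInt(range) };
        int a = colon == 0 ? 0 : Integer.parseInt(range.substring(0, colon));
        int b = colon == range.length() - 1 ? max
                : Integer.parseInt(range.substring(colon + 1));
        int[] out = new int[b - a + 1];
        for (int i = 0; i < out.length; i++) out[i] = a + i;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(expand("1:4", 9))); // [1, 2, 3, 4]
        System.out.println(java.util.Arrays.toString(expand("3:", 5)));  // [3, 4, 5]
        System.out.println(java.util.Arrays.toString(expand(":", 2)));   // [0, 1, 2]
    }
}
```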
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
This defines a 3x3 matrix with 5, 4, and 1 for its diagonal elements. Visually it looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
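The row-major layout described above means element (row, col) lives at index row*numCols + col in a flat array. A small standalone sketch of that indexing (plain Java, not EJML code):

```java
// Illustrates row-major storage: element (row, col) is at index row*numCols + col.
// Standalone sketch for illustration; not EJML code.
public class RowMajorDemo {
    public static double get(double[] data, int numCols, int row, int col) {
        return data[row * numCols + col];
    }

    public static void main(String[] args) {
        // The inline matrix [5 0 0;0 4 0;0 0 1] flattened in row-major order:
        double[] data = {5, 0, 0,  0, 4, 0,  0, 0, 1};
        System.out.println(get(data, 3, 0, 0)); // 5.0
        System.out.println(get(data, 3, 1, 1)); // 4.0
        System.out.println(get(data, 3, 2, 2)); // 1.0
    }
}
```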
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
This will concatenate matrices A, B, and C vertically, along their rows. They must have the same number of columns. As you might guess, to concatenate along columns you would write:
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A)     Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
</pre>
'''Symbols'''
<pre>
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The output matrix should also be resized.
[[Example Customizing Equations]]
ac7bc2ce70e2059d91552c63378e3e4ae86b120b
98
57
2015-04-01T02:39:52Z
Peter
1
wikitext
text/x-wiki
Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
The algorithm is provided a function, set of inputs, set of outputs, and an initial estimate of the parameters (this often works with all zeros). It finds the parameters that minimize the difference between the computed output and the observed output. A numerical Jacobian is used to estimate the function's gradient.
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java check out [http://ddogleg.org DDogleg].
Github Code:
[https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt]
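The numerical Jacobian mentioned above is a forward difference: each parameter is perturbed by a small delta and the resulting change in the function's output is divided by that delta. A minimal standalone sketch of the idea behind the computeNumericalJacobian() method in the full example below, shown here for a scalar function in plain Java:

```java
import java.util.function.DoubleUnaryOperator;

// Forward-difference derivative: f'(x) ~= (f(x + h) - f(x)) / h.
// Minimal standalone sketch of the idea behind a numerical Jacobian.
public class ForwardDifference {
    public static double derivative(DoubleUnaryOperator f, double x, double h) {
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
    }

    public static void main(String[] args) {
        // d/dx of x^2 at x = 3 is 6; the forward difference gives roughly that.
        double d = derivative(v -> v * v, 3.0, 1e-8);
        System.out.println(d);
    }
}
```

Choosing the perturbation h trades truncation error against floating-point cancellation, which is why the full example notes that better schemes for picking DELTA exist.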
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
* d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
* H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// how much the numerical jacobian calculation perturbs the parameters by.
// In a better implementation there are smarter ways to compute this delta. See Numerical Recipes.
private final static double DELTA = 1e-8;
private double initialLambda;
// the function that is optimized
private Function func;
// the optimized parameters and associated costs
private DenseMatrix64F param;
private double initialCost;
private double finalCost;
// used by matrix operations
private DenseMatrix64F d;
private DenseMatrix64F H;
private DenseMatrix64F negDelta;
private DenseMatrix64F tempParam;
private DenseMatrix64F A;
// variables used by the numerical jacobian algorithm
private DenseMatrix64F temp0;
private DenseMatrix64F temp1;
// used when computing d and H variables
private DenseMatrix64F tempDH;
// Where the numerical Jacobian is stored.
private DenseMatrix64F jacobian;
/**
* Creates a new instance that uses the provided cost function.
*
* @param funcCost Cost function that is being optimized.
*/
public LevenbergMarquardt( Function funcCost )
{
this.initialLambda = 1;
// declare data to some initial small size. It will grow later on as needed.
int maxElements = 1;
int numParam = 1;
this.temp0 = new DenseMatrix64F(maxElements,1);
this.temp1 = new DenseMatrix64F(maxElements,1);
this.tempDH = new DenseMatrix64F(maxElements,1);
this.jacobian = new DenseMatrix64F(numParam,maxElements);
this.func = funcCost;
this.param = new DenseMatrix64F(numParam,1);
this.d = new DenseMatrix64F(numParam,1);
this.H = new DenseMatrix64F(numParam,numParam);
this.negDelta = new DenseMatrix64F(numParam,1);
this.tempParam = new DenseMatrix64F(numParam,1);
this.A = new DenseMatrix64F(numParam,numParam);
}
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
public DenseMatrix64F getParameters() {
return param;
}
/**
* Finds the best fit parameters.
*
* @param initParam The initial set of parameters for the function.
* @param X The inputs to the function.
* @param Y The "observed" output of the function
* @return true if it succeeded and false if it did not.
*/
public boolean optimize( DenseMatrix64F initParam ,
DenseMatrix64F X ,
DenseMatrix64F Y )
{
configure(initParam,X,Y);
// save the cost of the initial parameters so that it knows if it improves or not
initialCost = cost(param,X,Y);
// iterate until the difference between the costs is insignificant
// or it iterates too many times
if( !adjustParam(X, Y, initialCost) ) {
finalCost = Double.NaN;
return false;
}
return true;
}
/**
* Iterate until the difference between the costs is insignificant
* or it iterates too many times
*/
private boolean adjustParam(DenseMatrix64F X, DenseMatrix64F Y,
double prevCost) {
// lambda adjusts how big of a step it takes
double lambda = initialLambda;
// the difference between the current and previous cost
double difference = 1000;
for( int iter = 0; iter < 20 && difference > 1e-6 ; iter++ ) {
// compute some variables based on the gradient
computeDandH(param,X,Y);
// try various step sizes and see if any of them improve the
// results over what has already been done
boolean foundBetter = false;
for( int i = 0; i < 5; i++ ) {
computeA(A,H,lambda);
if( !solve(A,d,negDelta) ) {
return false;
}
// compute the candidate parameters
subtract(param, negDelta, tempParam);
double cost = cost(tempParam,X,Y);
if( cost < prevCost ) {
// the candidate parameters produced better results so use it
foundBetter = true;
param.set(tempParam);
difference = prevCost - cost;
prevCost = cost;
lambda /= 10.0;
} else {
lambda *= 10.0;
}
}
// it reached a point where it can't improve so exit
if( !foundBetter )
break;
}
finalCost = prevCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure( DenseMatrix64F initParam , DenseMatrix64F X , DenseMatrix64F Y )
{
if( Y.getNumRows() != X.getNumRows() ) {
throw new IllegalArgumentException("Different vector lengths");
} else if( Y.getNumCols() != 1 || X.getNumCols() != 1 ) {
throw new IllegalArgumentException("Inputs must be a column vector");
}
int numParam = initParam.getNumElements();
int numPoints = Y.getNumRows();
if( param.getNumElements() != initParam.getNumElements() ) {
// reshaping a matrix means that new memory is only declared when needed
this.param.reshape(numParam,1, false);
this.d.reshape(numParam,1, false);
this.H.reshape(numParam,numParam, false);
this.negDelta.reshape(numParam,1, false);
this.tempParam.reshape(numParam,1, false);
this.A.reshape(numParam,numParam, false);
}
param.set(initParam);
// reshaping a matrix means that new memory is only declared when needed
temp0.reshape(numPoints,1, false);
temp1.reshape(numPoints,1, false);
tempDH.reshape(numPoints,1, false);
jacobian.reshape(numParam,numPoints, false);
}
/**
* Computes the d and H parameters. Where d is the average error gradient and
* H is an approximation of the hessian.
*/
private void computeDandH( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y )
{
func.compute(param,x, tempDH);
subtractEquals(tempDH, y);
computeNumericalJacobian(param,x,jacobian);
int numParam = param.getNumElements();
int length = x.getNumElements();
// d = average{ (f(x_i;p) - y_i) * jacobian(:,i) }
for( int i = 0; i < numParam; i++ ) {
double total = 0;
for( int j = 0; j < length; j++ ) {
total += tempDH.get(j,0)*jacobian.get(i,j);
}
d.set(i,0,total/length);
}
// compute the approximation of the hessian
multTransB(jacobian,jacobian,H);
scale(1.0/length,H);
}
/**
* A = H + lambda*I <br>
* <br>
* where I is an identity matrix.
*/
private void computeA( DenseMatrix64F A , DenseMatrix64F H , double lambda )
{
final int numParam = param.getNumElements();
A.set(H);
for( int i = 0; i < numParam; i++ ) {
A.set(i,i, A.get(i,i) + lambda);
}
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x;p) - y)^2
*/
private double cost( DenseMatrix64F param , DenseMatrix64F X , DenseMatrix64F Y)
{
func.compute(param,X, temp0);
double error = diffNormF(temp0,Y);
return error*error / (double)X.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param The set of parameters that the Jacobian is to be computed at.
* @param pt The point around which the Jacobian is to be computed.
* @param deriv Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DenseMatrix64F param ,
DenseMatrix64F pt ,
DenseMatrix64F deriv )
{
double invDelta = 1.0/DELTA;
func.compute(param,pt, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it affects the results.
for( int i = 0; i < param.numRows; i++ ) {
param.data[i] += DELTA;
func.compute(param,pt, temp1);
// compute the difference between the two parameters and divide by the delta
add(invDelta,temp1,-invDelta,temp0,temp1);
// copy the results into the jacobian matrix
System.arraycopy(temp1.data,0,deriv.data,i*pt.numRows,pt.numRows);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized.
*/
public interface Function {
/**
* Computes the output for each value in matrix x given the set of parameters.
*
* @param param The parameter for the function.
* @param x the input points.
* @param y the resulting output.
*/
public void compute( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y );
}
}
</syntaxhighlight>
d4a869ecca69dee2c0fe2b16bb66a96a0d647d28
Solving Linear Systems
0
20
58
2015-03-22T16:02:23Z
Peter
1
Created page with "A fundamental problem in linear algebra is solving systems of linear equations. A linear system is any equation than can be expressed in this format: <pre> A*x = b </pre> w..."
wikitext
text/x-wiki
A fundamental problem in linear algebra is solving systems of linear equations. A linear system is any equation that can be expressed in this format:
<pre>
A*x = b
</pre>
where ''A'' is m by n, ''x'' is n by o, and ''b'' is m by o. Most of the time o=1. The best way to solve these equations depends on the structure of the matrix ''A''. For example, if it's square and positive definite then [http://ejml.org/javadoc/org/ejml/interfaces/decomposition/CholeskyDecomposition.html Cholesky] decomposition is the way to go. On the other hand if it is tall m > n, then [http://ejml.org/javadoc/org/ejml/interfaces/decomposition/QRDecomposition.html QR] is the way to go.
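As a concrete illustration of the square m=n case, a 2x2 system can be solved directly by Cramer's rule. This is a standalone plain-Java sketch for illustration only; EJML's solvers use decompositions, which scale and handle conditioning far better:

```java
// Solves a 2x2 system A*x = b via Cramer's rule. Illustration only:
// for real work use an EJML LinearSolver instead.
public class Solve2x2 {
    public static double[] solve(double a11, double a12, double a21, double a22,
                                 double b1, double b2) {
        double det = a11 * a22 - a12 * a21;
        if (Math.abs(det) < 1e-12)
            throw new IllegalArgumentException("Singular matrix");
        return new double[]{ (b1 * a22 - b2 * a12) / det,
                             (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        // [2 1; 1 3] * x = [5; 10]  ->  x = [1; 3]
        double[] x = solve(2, 1, 1, 3, 5, 10);
        System.out.println(x[0] + " " + x[1]); // 1.0 3.0
    }
}
```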
Each of the three interfaces (Procedural, SimpleMatrix, Equations) provides high level ways to solve linear systems which don't require you to specify the underlying algorithm. While convenient, these are not always the best approach in high performance situations. They create/destroy memory and don't provide you with access to their full functionality. If the best performance is needed then you should use a LinearSolver or one of its derived interfaces for a specific family of algorithms.
First a description is provided on how to solve linear systems using Procedural, SimpleMatrix, and then Equations. After that an overview of LinearSolver is presented.
= High Level Interfaces =
All high level interfaces essentially use the same code at the low level, which is the Procedural interface. This means that they have the same strengths and weaknesses. Their strength is simplicity. They will automatically select LU and QR decomposition, depending on the matrix's shape.
You should use the lower level LinearSolver if any of the following are true:
* Your matrix can sometimes be singular
* You wish to perform a pseudo inverse
* You need to avoid creating new memory
* You need to select a specific decomposition
* You need access to the low level decomposition
The case of singular or nearly singular matrices is worth discussing more. All of these high level approaches do attempt to detect singular matrices. The problem is that they aren't reliable and no tweaking of thresholds will make them reliable. If you are in a situation where you need to come up with a solution and it might be singular then you really need to know what you are doing. If a system is singular it means there are an infinite number of solutions.
== Procedural ==
The way to solve linear systems in the Procedural interface is with CommonOps.solve(). Make sure you check its return value to see if it failed! It ''might'' fail if the matrix is singular or nearly singular.
<syntaxhighlight lang="java">
if( !CommonOps.solve(A,b,x) ) {
    throw new IllegalArgumentException("Singular matrix");
}
</syntaxhighlight>
== SimpleMatrix ==
<syntaxhighlight lang="java">
try {
    SimpleMatrix x = A.solve(b);
} catch ( SingularMatrixException e ) {
    throw new IllegalArgumentException("Singular matrix");
}
</syntaxhighlight>
SingularMatrixException is a RuntimeException, so you technically don't have to catch it. But if you don't catch it, it can take down your whole application when the matrix is singular!
== Equations ==
<syntaxhighlight lang="java">
eq.process("x=b/A");
</syntaxhighlight>
If it's singular it will throw a RuntimeException.
= Low level Linear Solvers =
Low level linear solvers in EJML all implement the [http://ejml.org/javadoc/org/ejml/interfaces/linsol/LinearSolver.html LinearSolver] interface. It provides a lot more power than the high level interfaces, but is also more difficult to use and requires more diligence. For example, you can no longer assume that it won't modify the input matrices!
== LinearSolver ==
The LinearSolver interface is designed to be easy to use while providing most of the power that directly using a decomposition would provide.
<syntaxhighlight lang="java">
public interface LinearSolver< T extends Matrix64F> {
public boolean setA( T A );
public T getA();
public double quality();
public void solve( T B , T X );
public void invert( T A_inv );
public boolean modifiesA();
public boolean modifiesB();
public <D extends DecompositionInterface>D getDecomposition();
}
</syntaxhighlight>
Each linear solver implementation is built around a different decomposition. The best way to create a new LinearSolver instance is with [http://ejml.org/javadoc/org/ejml/factory/LinearSolverFactory.html LinearSolverFactory]. It provides an easy way to select the correct solver without plowing through the documentation.
Two steps are required to solve a system with a LinearSolver, as is shown below:
<syntaxhighlight lang="java">
LinearSolver<DenseMatrix64F> solver = LinearSolverFactory.qr(A.numRows,A.numCols);
if( !solver.setA(A) ) {
    throw new IllegalArgumentException("Singular matrix");
}
if( solver.quality() <= 1e-8 )
    throw new IllegalArgumentException("Nearly singular matrix");
solver.solve(b,x);
</syntaxhighlight>
As with the high-level interfaces you can't trust algorithms such as QR, LU, or Cholesky to detect singular matrices! Sometimes they will work and sometimes they will not. Even adjusting the quality threshold won't fix the problem in all situations.
Additional capabilities included in LinearSolver are:
* invert()
** Will invert a matrix more efficiently than solve() can.
* quality()
** Returns a positive number which, if small, indicates a singular or nearly singular system. Much faster to compute than the SVD.
* modifiesA() and modifiesB()
** To reduce memory requirements, most LinearSolvers will modify 'A' and store the decomposition inside of it. Some do the same for 'B'. These functions tell the user if the inputs are being modified or not.
* getDecomposition()
** Provides access to the internal decomposition used.
== LinearSolverSafe ==
If the input matrices 'A' and 'B' should not be modified then LinearSolverSafe is a convenient way to ensure that precondition:
<syntaxhighlight lang="java">
LinearSolver<DenseMatrix64F> solver = LinearSolverFactory.leastSquares();
solver = new LinearSolverSafe<DenseMatrix64F>(solver);
</syntaxhighlight>
== Pseudo Inverse ==
EJML provides two different pseudo inverses. One is SVD based and the other is QRP based, where QRP stands for QR with column pivots. QRP can be thought of as a lightweight SVD: much faster to compute, but it doesn't handle singular matrices quite as well.
<syntaxhighlight lang="java">
LinearSolver<DenseMatrix64F> pinv = LinearSolverFactory.pseudoInverse(true);
</syntaxhighlight>
This will create an SVD based pseudo inverse. If you instead specify false, it will create a QRP based pseudo-inverse.
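For intuition, when A has full column rank the pseudo-inverse reduces to pinv(A) = (A'A)^-1 A', the normal-equations least-squares solution. Below is a standalone plain-Java sketch of that special case for an m x 2 matrix; it is an illustration only, and unlike the SVD and QRP solvers above it does not handle rank-deficient input:

```java
// Least squares via the normal equations: solves (A^T A) p = A^T b
// for a full-column-rank m x 2 row-major matrix A. Sketch only; the
// SVD/QRP pseudo-inverses also handle rank-deficient matrices.
public class PinvDemo {
    public static double[] leastSquares(double[] A, int m, double[] b) {
        // Accumulate the 2x2 normal matrix N = A^T A and the vector c = A^T b.
        double n11 = 0, n12 = 0, n22 = 0, c1 = 0, c2 = 0;
        for (int i = 0; i < m; i++) {
            double a1 = A[2 * i], a2 = A[2 * i + 1];
            n11 += a1 * a1; n12 += a1 * a2; n22 += a2 * a2;
            c1 += a1 * b[i]; c2 += a2 * b[i];
        }
        // Solve the 2x2 system N * p = c directly.
        double det = n11 * n22 - n12 * n12;
        return new double[]{ (c1 * n22 - c2 * n12) / det,
                             (n11 * c2 - n12 * c1) / det };
    }

    public static void main(String[] args) {
        // Fit y = p1 + p2*x to points (0,1), (1,3), (2,5): exact answer p = [1, 2].
        double[] A = {1, 0,  1, 1,  1, 2};
        double[] p = leastSquares(A, 3, new double[]{1, 3, 5});
        System.out.println(p[0] + " " + p[1]); // 1.0 2.0
    }
}
```

Squaring the condition number via A'A is exactly why EJML's solvers prefer QR or SVD for serious least-squares work.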
== AdjustableLinearSolver ==
In situations where rows from the linear system are added or removed (see [[Example Polynomial Fitting]]) an AdjustableLinearSolver can be used to efficiently resolve the modified system. AdjustableLinearSolver is an extension of LinearSolver that adds addRowToA() and removeRowFromA(), which add and remove rows from A respectively. After a modification the solution can be recomputed by calling solve() again.
<syntaxhighlight lang="java">
AdjustableLinearSolver solver = LinearSolverFactory.adjustable();
if( !solver.setA(A) ) {
    throw new IllegalArgumentException("Singular matrix");
}
solver.solve(b,x);
// add a row
double row[] = new double[N];
... code ...
solver.addRowToA(row,2);
.... adjust b and x ....
solver.solve(b,x);
// remove a row
solver.removeRowFromA(7);
.... adjust b and x ....
solver.solve(b,x);
</syntaxhighlight>
a074e15557866fbc059b6d9f2ba8f50b328d4647
Main Page
0
1
59
23
2015-03-22T16:10:32Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and has been released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy to use flow styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.27''
|-
| '''Date:''' ''UNRELEASED''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definitiveness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue are finished.
</center>
e50108570465a8df09eec64a97f07d20558a4070
69
59
2015-03-23T14:30:32Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
WARNING: The new webpage contains material that's not in the current stable release!
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished with a clean API, multiple interfaces, and by dynamically selecting the best algorithm to use at runtime. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''Procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.27''
|-
| '''Date:''' ''UNRELEASED''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded development will begin once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
41aad808e1a53f3bdae57171263c2bf3d967ccd7
74
69
2015-03-24T05:00:48Z
Peter
1
Protected "[[Main Page]]" ([Edit=Allow only administrators] (indefinite) [Move=Allow only administrators] (indefinite))
wikitext
text/x-wiki
__NOTOC__
<center>
WARNING: The new webpage contains material that's not in the current stable release!
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished with a clean API, multiple interfaces, and by dynamically selecting the best algorithm to use at runtime. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''Procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.27''
|-
| '''Date:''' ''UNRELEASED''
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [http://code.google.com/p/efficient-java-matrix-library/issues/list Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
** Incomplete Support
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded development will begin once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
41aad808e1a53f3bdae57171263c2bf3d967ccd7
Tutorial Complex
0
21
60
2015-03-22T16:29:29Z
Peter
1
Created page with "The introduction of complex matrices to EJML is very recent and he best way to handle them is still undecided. The only way to manipulate complex matrices is using a procedur..."
wikitext
text/x-wiki
The introduction of complex matrices to EJML is very recent, and the best way to handle them is still undecided. The only way to manipulate complex matrices is through the procedural interface. The complex analog of each procedural class can be found by adding a "C" in front of its name. Here are a few examples:
{| class="wikitable"
! Real !! Complex
|-
| CommonOps || CCommonOps
|-
| MatrixFeatures || CMatrixFeatures
|-
| NormOps || CNormOps
|-
| RandomMatrices || CRandomMatrices
|-
| SpecializedOps || CSpecializedOps
|}
The complex analog of DenseMatrix64F is DenseMatrixC64F. The following functions provide different ways to convert one matrix type into the other.
{| class="wikitable"
! Function !! Description
|-
| CCommonOps.convert() || Converts a real matrix into a complex matrix
|-
| CCommonOps.stripReal() || Strips the real component and places it into a real matrix.
|-
| CCommonOps.stripImaginary() || Strips the imaginary component and places it into a real matrix.
|-
| CCommonOps.magnitude() || Computes the magnitude of each element and places it into a real matrix.
|}
There is also Complex64F which contains a single complex number. [[Example Complex Math]] does a good job covering how to manipulate those objects.
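The arithmetic that a single-number class like Complex64F encapsulates can be sketched in plain Java. This is an EJML-independent illustration of the underlying math, not the library's API; the class and method names here are hypothetical:

```java
// Plain-Java sketch of complex arithmetic: each number is a
// (real, imaginary) pair of doubles, as in a row-major complex matrix.
public class ComplexSketch {
    // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    static double[] multiply(double a, double b, double c, double d) {
        return new double[]{a * c - b * d, a * d + b * c};
    }

    // |a + bi| = sqrt(a^2 + b^2) -- the quantity a "magnitude"
    // conversion computes for every element of a complex matrix
    static double magnitude(double a, double b) {
        return Math.sqrt(a * a + b * b);
    }

    public static void main(String[] args) {
        double[] p = multiply(1, 2, 3, 4); // (1+2i)(3+4i) = -5 + 10i
        System.out.println(p[0] + " " + p[1]);
        System.out.println(magnitude(3, 4)); // 5.0
    }
}
```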
2086ac61edfe265216d8dabce83d77ae5d90e414
61
60
2015-03-22T16:29:46Z
Peter
1
wikitext
text/x-wiki
The introduction of complex matrices to EJML is very recent, and the best way to handle them is still undecided. The only way to manipulate complex matrices is through the procedural interface. The complex analog of each procedural class can be found by adding a "C" in front of its name. Here are a few examples:
{| class="wikitable"
! Real !! Complex
|-
| CommonOps || CCommonOps
|-
| MatrixFeatures || CMatrixFeatures
|-
| NormOps || CNormOps
|-
| RandomMatrices || CRandomMatrices
|-
| SpecializedOps || CSpecializedOps
|}
The complex analog of DenseMatrix64F is DenseMatrixC64F. The following functions provide different ways to convert one matrix type into the other.
{| class="wikitable"
! Function !! Description
|-
| CCommonOps.convert() || Converts a real matrix into a complex matrix
|-
| CCommonOps.stripReal() || Strips the real component and places it into a real matrix.
|-
| CCommonOps.stripImaginary() || Strips the imaginary component and places it into a real matrix.
|-
| CCommonOps.magnitude() || Computes the magnitude of each element and places it into a real matrix.
|}
There is also Complex64F which contains a single complex number. [[Example Complex Math]] does a good job covering how to manipulate those objects.
db4f8dea4cb079ccbc149b5e295a12ba32b53e0d
63
61
2015-03-22T16:37:58Z
Peter
1
Peter moved page [[Complex Math]] to [[Tutorial Complex]]
wikitext
text/x-wiki
The introduction of complex matrices to EJML is very recent, and the best way to handle them is still undecided. The only way to manipulate complex matrices is through the procedural interface. The complex analog of each procedural class can be found by adding a "C" in front of its name. Here are a few examples:
{| class="wikitable"
! Real !! Complex
|-
| CommonOps || CCommonOps
|-
| MatrixFeatures || CMatrixFeatures
|-
| NormOps || CNormOps
|-
| RandomMatrices || CRandomMatrices
|-
| SpecializedOps || CSpecializedOps
|}
The complex analog of DenseMatrix64F is DenseMatrixC64F. The following functions provide different ways to convert one matrix type into the other.
{| class="wikitable"
! Function !! Description
|-
| CCommonOps.convert() || Converts a real matrix into a complex matrix
|-
| CCommonOps.stripReal() || Strips the real component and places it into a real matrix.
|-
| CCommonOps.stripImaginary() || Strips the imaginary component and places it into a real matrix.
|-
| CCommonOps.magnitude() || Computes the magnitude of each element and places it into a real matrix.
|}
There is also Complex64F which contains a single complex number. [[Example Complex Math]] does a good job covering how to manipulate those objects.
db4f8dea4cb079ccbc149b5e295a12ba32b53e0d
Complex Math
0
22
64
2015-03-22T16:37:58Z
Peter
1
Peter moved page [[Complex Math]] to [[Tutorial Complex]]
wikitext
text/x-wiki
#REDIRECT [[Tutorial Complex]]
ddab99f7dc90ba732aa20d591d3cf4f58c0dd778
Input and Output
0
23
66
2015-03-23T01:04:20Z
Peter
1
Created page with "EJML provides several different methods for loading, saving, and displaying a matrix. A matrix can be saved and loaded from a file, displayed visually in a window, printed to..."
wikitext
text/x-wiki
EJML provides several different methods for loading, saving, and displaying a matrix. A matrix can be saved to and loaded from a file, displayed visually in a window, printed to the console, or created from raw arrays or strings.
__TOC__
= Text Output =
A matrix can be printed to standard out using its built-in ''print()'' method; this works for both DenseMatrix64F and SimpleMatrix. For customized output, the user can provide a formatting string that is compatible with printf().
Code:
<syntaxhighlight lang="java">
public static void main( String []args ) {
DenseMatrix64F A = new DenseMatrix64F(2,3,true,1.1,2.34,3.35436,4345,59505,0.00001234);
A.print();
System.out.println();
A.print("%e");
System.out.println();
A.print("%10.2f");
}
</syntaxhighlight>
Output:
<pre>
Type = dense real , numRows = 2 , numCols = 3
1.100 2.340 3.354
4345.000 59505.000 0.000
Type = dense real , numRows = 2 , numCols = 3
1.100000e+00 2.340000e+00 3.354360e+00
4.345000e+03 5.950500e+04 1.234000e-05
Type = dense real , numRows = 2 , numCols = 3
1.10 2.34 3.35
4345.00 59505.00 0.00
</pre>
= CSV Input/Output =
A Comma Separated Value (CSV) reader and writer is provided by EJML. The advantage of this file format is that it's human readable; the disadvantage is that it's large and slow. Two CSV formats are supported: one in which the first line specifies the matrix dimensions, and another in which the user specifies them programmatically.
In the example below, the matrix size and type are specified in the first line: rows, columns, and real/complex. The remainder of the file contains the value of each element in the matrix in row-major order. A file containing
<pre>
2 3 real
2.4 6.7 9
-2 3 5
</pre>
would describe a real matrix with 2 rows and 3 columns.
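As an illustration of the layout described above, the header-plus-row-major format can be parsed in a few lines of plain Java. This is a hedged sketch of the format only, not EJML's actual reader; the class and method names are hypothetical:

```java
// Minimal parser for the "rows cols real" CSV layout described above.
public class CsvSketch {
    // Parses the text into a row-major array; the matrix dimensions are
    // written into shape[0] (rows) and shape[1] (cols).
    static double[] parse(String text, int[] shape) {
        String[] tok = text.trim().split("\\s+");
        shape[0] = Integer.parseInt(tok[0]);
        shape[1] = Integer.parseInt(tok[1]);
        if (!tok[2].equals("real"))      // this sketch handles real only
            throw new IllegalArgumentException(tok[2]);
        double[] data = new double[shape[0] * shape[1]];
        for (int i = 0; i < data.length; i++)
            data[i] = Double.parseDouble(tok[3 + i]); // row-major order
        return data;
    }

    public static void main(String[] args) {
        int[] shape = new int[2];
        double[] m = parse("2 3 real\n2.4 6.7 9\n-2 3 5\n", shape);
        System.out.println(shape[0] + "x" + shape[1] + " first=" + m[0]);
    }
}
```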
DenseMatrix64F Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
DenseMatrix64F A = new DenseMatrix64F(2,3,true,new double[]{1,2,3,4,5,6});
try {
MatrixIO.saveCSV(A, "matrix_file.csv");
DenseMatrix64F B = MatrixIO.loadCSV("matrix_file.csv");
B.print();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
</syntaxhighlight>
SimpleMatrix Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
SimpleMatrix A = new SimpleMatrix(2,3,true,new double[]{1,2,3,4,5,6});
try {
A.saveToFileCSV("matrix_file.csv");
SimpleMatrix B = SimpleMatrix.loadCSV("matrix_file.csv");
B.print();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
</syntaxhighlight>
= Serialized Binary Input/Output =
DenseMatrix64F is a serializable object and is fully compatible with any Java serialization routine. MatrixIO provides save() and load() functions for saving to and reading from a file. The matrix is saved as a Java binary serialized object. SimpleMatrix provides its own functions (which are wrappers around MatrixIO) for saving to and loading from files.
MatrixIO Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
DenseMatrix64F A = new DenseMatrix64F(2,3,true,new double[]{1,2,3,4,5,6});
try {
MatrixIO.saveBin(A,"matrix_file.data");
DenseMatrix64F B = MatrixIO.loadBin("matrix_file.data");
B.print();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
</syntaxhighlight>
'''NOTE:''' In v0.18, saveBin()/loadBin() are actually named saveXML()/loadXML(), which is a mistake since the format is not XML.
SimpleMatrix Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
SimpleMatrix A = new SimpleMatrix(2,3,true,new double[]{1,2,3,4,5,6});
try {
A.saveToFileBinary("matrix_file.data");
SimpleMatrix B = SimpleMatrix.loadBinary("matrix_file.data");
B.print();
} catch (IOException e) {
throw new RuntimeException(e);
}
}
</syntaxhighlight>
= Visual Display =
Understanding the state of a matrix from text output can be difficult, especially for large matrices. To help in these situations, a visual way of viewing a matrix is provided by MatrixVisualization. Calling MatrixVisualization.show() creates a window that shows the matrix. Positive elements appear as a shade of red, negative ones as a shade of blue, and zeros as black. How red or blue an element is depends on its magnitude.
Example Code:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
DenseMatrix64F A = new DenseMatrix64F(4,4,true,
0,2,3,4,-2,0,2,3,-3,-2,0,2,-4,-3,-2,0);
MatrixIO.show(A,"Small Matrix");
DenseMatrix64F B = new DenseMatrix64F(25,50);
for( int i = 0; i < 25; i++ )
B.set(i,i,i+1);
MatrixIO.show(B,"Larger Diagonal Matrix");
}
</syntaxhighlight>
Output:
{|
| http://ejml.org/wiki/MY_IMAGES/small_matrix.gif || http://ejml.org/wiki/MY_IMAGES/larger_matrix.gif
|}
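The shading rule described above can be sketched as a simple value-to-RGB mapping. This is an assumption about how such a mapping could work, not EJML's actual rendering code:

```java
// Map a matrix element to an RGB triple: positive -> shade of red,
// negative -> shade of blue, zero -> black; intensity scales with
// |value| relative to the largest magnitude in the matrix.
public class ColorSketch {
    static int[] shade(double value, double maxAbs) {
        int intensity = maxAbs == 0
                ? 0
                : (int) Math.round(255 * Math.abs(value) / maxAbs);
        return value >= 0
                ? new int[]{intensity, 0, 0}   // red channel
                : new int[]{0, 0, intensity};  // blue channel
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(shade( 4, 4))); // brightest red
        System.out.println(java.util.Arrays.toString(shade(-2, 4))); // mid-intensity blue
        System.out.println(java.util.Arrays.toString(shade( 0, 4))); // black
    }
}
```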
bf94f23d982df397fb1d020729b6a67f60aece03
Example Kalman Filter
0
10
67
46
2015-03-23T01:05:06Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. Runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterProcedural]
* [https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter. However, there are applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F;
private SimpleMatrix Q;
private SimpleMatrix H;
// system state estimate
private SimpleMatrix x;
private SimpleMatrix P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced relative to KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F;
private DenseMatrix64F Q;
private DenseMatrix64F H;
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x;
private DenseMatrix64F P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
7a4577456dcc0c886926bfe206a1f8693742ed41
96
67
2015-04-01T02:31:36Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using the different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. Runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterProcedural]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter. However, there are applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F,Q,H;
// system state estimate
private SimpleMatrix x,P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced relative to KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F,Q,H;
// system state estimate
private DenseMatrix64F x,P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x,P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
8eb0cee850ea1ce181f032bc61a60c69cdd660d2
Performance
0
7
68
19
2015-03-23T01:12:20Z
Peter
1
wikitext
text/x-wiki
= How does EJML compare? =
There are several issues to consider when selecting a linear algebra library: runtime speed, memory consumption, and stability. All three are very important, but speed tends to get the most attention. [https://code.google.com/p/java-matrix-benchmark/ Java Matrix Benchmark] was developed at the same time as EJML and is used to evaluate the most popular linear algebra libraries written in Java. The general takeaway from those results is that EJML is one of the fastest single threaded libraries and in many instances is competitive with multi-threaded libraries. It is also among the most stable and most memory efficient.
<center>
[[File:summary.png]]
</center>
= Fastest Interface? =
Another question when using EJML is: ''Which interface should I use for high performance computing?'' In general you can get the most performance out of the procedural interface. However, there are times when the added complexity of that interface isn't worth it. For example, if you are working with very large matrices the object oriented [[SimpleMatrix]] is almost as fast. Below are benchmarking results comparing the different interfaces in EJML.
== Relative Runtime Plots ==
Results are presented using relative runtime plots. These plots show how fast each interface is relative to the others. The fastest interface at each matrix size always has a value of one since it performs the most operations per second. For more information see the Java Matrix Benchmark manual.
Looking at the addition plot, SimpleMatrix runs at about 0.25 times the speed of DenseMatrix64F for smaller matrices. When it processes larger matrices it runs at about 0.6 times the speed of the operations interface, meaning that for larger matrices it is relatively faster. For more expensive operations (SVD, solve, matrix multiplication, etc.) it is clear that the difference in performance is not significant for matrices that are 100 by 100 or larger.
In the plots, EJML refers to EJML using the operations interface and SEJML refers to EJML using SimpleMatrix.
== Test Environment ==
{| class="wikitable" |
! Date !! July 4, 2010
|-
| OS || Vista 64bit
|-
| CPU || Q9400 - 2.66 Ghz - 4 cores
|-
| JVM || Java HotSpot(TM) 64-Bit Server VM 1.6.0_16
|-
| Benchmark || 0.7pre
|-
| EJML || 0.14pre
|}
TODO recompute these results with Equations and move the files to a local directory
== Basic Operation ==
{|
|-
| http://ejml.org/wiki/MY_IMAGES/performance/add.png || http://ejml.org/wiki/MY_IMAGES/performance/scale.png
|-
| http://ejml.org/wiki/MY_IMAGES/performance/mult.png || http://ejml.org/wiki/MY_IMAGES/performance/inv.png
|-
| http://ejml.org/wiki/MY_IMAGES/performance/det.png || http://ejml.org/wiki/MY_IMAGES/performance/tran.png
|-
|}
== Solving and Decompositions ==
{|
|-
| http://ejml.org/wiki/MY_IMAGES/performance/solveEq.png || http://ejml.org/wiki/MY_IMAGES/performance/solveOver.png
|-
| http://ejml.org/wiki/MY_IMAGES/performance/svd.png || http://ejml.org/wiki/MY_IMAGES/performance/EigSymm.png
|}
18522a3b96c1c103c2f50f714f7d1160130b155f
Unit Testing
0
24
70
2015-03-23T14:39:35Z
Peter
1
Created page with "[http://en.wikipedia.org/wiki/Unit_testing Unit testing] is an essential part of modern software development that helps ensure correctness. EJML itself makes extensive use of..."
wikitext
text/x-wiki
[http://en.wikipedia.org/wiki/Unit_testing Unit testing] is an essential part of modern software development that helps ensure correctness. EJML itself makes extensive use of unit tests as well as system level tests. EJML also provides several functions that are specifically designed for creating unit tests.
EjmlUnitTests and MatrixFeatures are two classes which contain useful functions for unit testing. EjmlUnitTests provides an interface similar to JUnit's assertions. MatrixFeatures is primarily intended for extracting high level information about a matrix, but also contains several functions for testing if two matrices are equal or have specific characteristics.
The following is a brief introduction to unit testing with EJML. See the JavaDoc for a more detailed list of functions available in EjmlUnitTests and MatrixFeatures.
= Example with EjmlUnitTests =
EjmlUnitTests provides various functions for testing equality and matrix shape. Below is an example taken from an internal EJML unit test that compares the output from two different matrix decompositions with different matrix types:
<syntaxhighlight lang="java">
DenseMatrix64F Q = decomp.getQ(null);
BlockMatrix64F Qb = decompB.getQ(null,false);
EjmlUnitTests.assertEquals(Q,Qb,1e-8);
</syntaxhighlight>
In this example it checks whether each element of the two matrices is within 1e-8 of the other. The explicit reference to EjmlUnitTests can be avoided with a static import. If an error is found and the test fails, the exact element at which it failed is printed.
To maintain compatibility with different unit test libraries a generic runtime exception is thrown if a test fails.
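The element-wise check described above can be sketched in plain Java. This is a hypothetical illustration of the pattern, using 2D arrays in place of EJML's matrix types; it is not EJML's actual implementation:

```java
// Hypothetical sketch of a tolerance-based matrix assertion, illustrating
// the pattern EjmlUnitTests follows: compare shapes, then each element,
// and throw a generic RuntimeException naming the failing element.
class ToleranceAssertSketch {
    static void assertEquals(double[][] a, double[][] b, double tol) {
        if (a.length != b.length || a[0].length != b[0].length)
            throw new RuntimeException("Matrix shapes differ");
        for (int row = 0; row < a.length; row++)
            for (int col = 0; col < a[0].length; col++)
                if (Math.abs(a[row][col] - b[row][col]) > tol)
                    throw new RuntimeException("Mismatch at (" + row + "," + col + ")");
    }

    public static void main(String[] args) {
        double[][] q  = {{1.0, 0.0}, {0.0, 1.0}};
        double[][] qb = {{1.0, 1e-9}, {0.0, 1.0}};
        assertEquals(q, qb, 1e-8);  // passes: every element within 1e-8
        System.out.println("Matrices match within tolerance");
    }
}
```

Throwing a plain RuntimeException, as above, is what keeps such a helper independent of any particular unit test library.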
= Example using MatrixFeatures =
MatrixFeatures is not designed with unit testing in mind, but provides many useful functions for unit tests. For example, to test for equality between two matrices:
<syntaxhighlight lang="java">
assertTrue(MatrixFeatures.isEquals(Q,Qb,1e-8));
</syntaxhighlight>
Here the JUnit function assertTrue() has been used. MatrixFeatures.isEquals() returns true if the two matrices are within tolerance of each other. If the test fails it doesn't print any additional information, such as the element at which it failed.
One advantage of MatrixFeatures is it provides support for many more specialized tests. For example if you want to know if a matrix is orthogonal call MatrixFeatures.isOrthogonal() or to test for symmetry call MatrixFeatures.isSymmetric().
a6a79e4e0969c1aa44c2526ef67e9d5ee265b377
Random matrices, Matrix Features, and Matrix Norms
0
25
72
2015-03-23T14:48:43Z
Peter
1
Created page with "__TOC__ == Random Matrices == Random matrices and vectors are used extensively in Monti Carlo methods, simulations, and testing. There are many different types of ways in w..."
wikitext
text/x-wiki
__TOC__
== Random Matrices ==
Random matrices and vectors are used extensively in Monte Carlo methods, simulations, and testing. There are many different ways in which a matrix can be randomized. For example, each element can be an independent random variable, or the rows/columns can be independent orthogonal vectors. EJML provides built in methods for creating a variety of types of random matrices.
Functions for creating random matrices are contained inside of the RandomMatrices class. A partial list of types of random matrices it can create includes:
* Uniform distribution in each element.
* Uniform distribution along diagonal elements.
* Triangular with a uniform distribution.
* Symmetric from a uniform distribution.
* Random with fixed singular values.
* Random with fixed eigenvalues.
* Random orthogonal.
Creating a random matrix is very simple as the code sample below shows:
<syntaxhighlight lang="java">
Random rand = new Random();
DenseMatrix64F A = RandomMatrices.createSymmetric(20,-2,3,rand);
</syntaxhighlight>
This creates a random 20 by 20 symmetric matrix 'A' whose elements range in value from -2 to 3.
== Matrix Features ==
It is common to describe a matrix based on different features it might possess. A common example is a symmetric matrix, whose elements have the following property: a<sub>i,j</sub> == a<sub>j,i</sub>. Testing for certain features is often required at runtime to detect computational errors caused by bad inputs or round off errors.
MatrixFeatures contains a list of commonly used matrix features. In practice a matrix in a computer will almost never exactly match a feature's definition due to small round off errors. For this reason a tolerance parameter is almost always provided for testing whether a matrix has a feature or not. What constitutes a reasonable tolerance depends on the application.
Functions include:
* If two matrices are identical.
* If a matrix contains NaN or infinite values.
* If a matrix is symmetric.
* If a matrix is positive definite.
* If a matrix is orthogonal.
* If a matrix is an identity matrix.
* If a matrix is the negative of another one.
* If a matrix is triangular.
* A matrix's rank and nullity.
* And several others...
Code Example:
<syntaxhighlight lang="java">
DenseMatrix64F A = new DenseMatrix64F(2,2);
A.set(0,1,2);
A.set(1,0,-2.0000000001);
if( MatrixFeatures.isSkewSymmetric(A,1e-8) )
System.out.println("Is skew symmetric!");
else
System.out.println("Should be skew symmetric!");
</syntaxhighlight>
Note that even though it is not exactly skew symmetric, it is within tolerance.
== Matrix Norms ==
Norms are a measure of the size of a vector or a matrix. One typical application is in error analysis.
Vector norms have the following properties:
# |x| > 0 if x != 0 and |0|= 0
# |a*x| = |a| |x|
# |x+y| <= |x| + |y|
Matrix norms have the following properties:
# |A| > 0 if A != 0
# | a A | = |a| |A|
# |A+B| <= |A| + |B|
# |AB| <= |A| |B|
where A and B are m by n matrices. Note that the last item in the list only applies to square matrices.
In EJML norms are computed inside the NormOps class. For some norms a fast method of computation is provided. Typically this means skipping steps that ensure numerical stability over a wider range of inputs. In applications where the input matrices or vectors are known to be well behaved, the fast functions can be used.
Code Example:
<syntaxhighlight lang="java">
double v = NormOps.normF(A);
</syntaxhighlight>
which computes the Frobenius norm of 'A'.
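The norm properties listed above can be checked numerically. The sketch below computes the Frobenius norm from its definition using plain Java arrays (an illustration only, not EJML's NormOps code) and exercises the scaling and triangle-inequality properties:

```java
// Frobenius norm from its definition: sqrt of the sum of squared elements.
// Helper methods for scaling and adding matrices are included so the
// norm properties |a*A| = |a| |A| and |A+B| <= |A| + |B| can be verified.
class NormPropertiesSketch {
    static double normF(double[][] a) {
        double sum = 0;
        for (double[] row : a)
            for (double v : row)
                sum += v * v;
        return Math.sqrt(sum);
    }

    static double[][] scale(double s, double[][] a) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = s * a[i][j];
        return r;
    }

    static double[][] add(double[][] a, double[][] b) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = a[i][j] + b[i][j];
        return r;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{0, 1}, {1, 0}};
        // |2A| = 2|A| and |A+B| <= |A| + |B|
        System.out.println(normF(scale(2, A)) == 2 * normF(A));
        System.out.println(normF(add(A, B)) <= normF(A) + normF(B));
    }
}
```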
18ea740634bacabeb6a3ffebb5c84b7cc512adce
Matrix Decompositions
0
26
75
2015-03-24T05:29:50Z
Peter
1
Created page with "#summary How to perform common matrix decompositions in EJML = Introduction = Matrix decomposition are used to reduce a matrix to a more simplic format which can be easily s..."
wikitext
text/x-wiki
= Introduction =
Matrix decompositions are used to reduce a matrix to a simpler form which can be easily solved and from which characteristics can be extracted. Below is a list of matrix decompositions and the data structures for which implementations exist.
{| class="wikitable"
! Decomposition !! DenseMatrix64F !! BlockMatrix64F !! CDenseMatrix64F
|-
| LU || Yes || || Yes
|-
| Cholesky L`*`L<sup>T</sup> and R<sup>T</sup>`*`R || Yes || Yes || Yes
|-
| Cholesky L`*`D`*`L<sup>T</sup> || Yes || ||
|-
| QR || Yes || Yes || Yes
|-
| QR Column Pivot || Yes || ||
|-
| Singular Value Decomposition (SVD) || Yes || ||
|-
| Generalized Eigen Value || Yes || ||
|-
| Symmetric Eigen Value || Yes || Yes ||
|-
| Bidiagonal || Yes || ||
|-
| Tridiagonal || Yes || Yes ||
|-
| Hessenberg || Yes || ||
|}
= Solving Using Matrix Decompositions =
Decompositions such as LU and QR are used to solve a linear system. A common mistake when using EJML is to directly decompose the matrix instead of using a LinearSolver. LinearSolvers simplify the process of solving a linear system, are very fast, and will automatically be updated as new algorithms are added. It is recommended that you use them whenever possible.
For more information on LinearSolvers see the wikipage at [[Solving Linear Systems]].
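To make concrete what "solving a linear system" means here, below is a minimal plain-Java 2 by 2 solve via Cramer's rule. This only illustrates the problem a LinearSolver addresses; EJML's solvers use decompositions such as LU and QR and scale to arbitrary sizes:

```java
// Solve the 2x2 system [[a,b],[c,d]] * x = [e,f] by Cramer's rule.
// Real solvers use LU/QR decompositions instead; this is only an
// illustration of the problem being solved.
class TinySolveSketch {
    static double[] solve2x2(double a, double b, double c, double d,
                             double e, double f) {
        double det = a * d - b * c;
        if (Math.abs(det) < 1e-12)
            throw new RuntimeException("Matrix is (nearly) singular");
        return new double[]{(e * d - b * f) / det, (a * f - e * c) / det};
    }

    public static void main(String[] args) {
        // 2*x0 + 1*x1 = 5,  1*x0 + 3*x1 = 10  =>  x = [1, 3]
        double[] x = solve2x2(2, 1, 1, 3, 5, 10);
        System.out.println(x[0] + " " + x[1]);
    }
}
```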
= SimpleMatrix =
SimpleMatrix has an easy to use interface built in for SVD and EVD:
<syntaxhighlight lang="java">
SimpleSVD svd = A.svd();
SimpleEVD evd = A.eig();
SimpleMatrix U = svd.getU();
</syntaxhighlight>
where A is a SimpleMatrix.
As with most operators in SimpleMatrix, it is possible to chain a decomposition with other commands. For instance, to print the singular values of a matrix:
<syntaxhighlight lang="java">
A.svd().getW().extractDiag().transpose().print();
</syntaxhighlight>
Other decompositions can be performed by accessing the internal DenseMatrix64F and using the decompositions shown in the following section. The following is an example of applying a Cholesky decomposition.
<syntaxhighlight lang="java">
CholeskyDecomposition<DenseMatrix64F> chol = DecompositionFactory.chol(A.numRows(),true);
if( !chol.decompose(A.getMatrix()))
throw new RuntimeException("Cholesky failed!");
SimpleMatrix L = SimpleMatrix.wrap(chol.getT(null));
</syntaxhighlight>
= DecompositionFactory =
The best way to create a matrix decomposition is by using DecompositionFactory. Directly instantiating a decomposition is discouraged because of the added complexity. DecompositionFactory is updated as new and faster algorithms are added.
<syntaxhighlight lang="java">
public interface DecompositionInterface<T extends Matrix64F> {
/**
* Computes the decomposition of the input matrix. Depending on the implementation
* the input matrix might be stored internally or modified. If it is modified then
* the function {@link #inputModified()} will return true and the matrix should not be
* modified until the decomposition is no longer needed.
*
* @param orig The matrix which is being decomposed. Modification is implementation dependent.
* @return Returns if it was able to decompose the matrix.
*/
public boolean decompose( T orig );
/**
* Checks if the input matrix to {@link #decompose(org.ejml.data.DenseMatrix64F)} is modified during
* the decomposition process.
*
* @return true if the input matrix to decompose() is modified.
*/
public boolean inputModified();
}
</syntaxhighlight>
Most decompositions in EJML implement DecompositionInterface. To decompose a matrix A, simply call decompose(A). It returns true if it was able to decompose the matrix and false otherwise. While in general you can trust the results when true is returned, some algorithms can have faults that are not reported. This is true for all linear algebra libraries.
To minimize memory usage, most decompositions will modify the original matrix passed into decompose(). Call inputModified() to determine if the input matrix is modified or not. If it is modified, and you do not wish it to be modified, just pass in a copy of the original instead.
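The copy-if-modified pattern described above can be sketched as follows. The interface and "decomposition" here are toy stand-ins (plain arrays and a made-up in-place operation), intended only to show the defensive use of inputModified():

```java
// Toy illustration of the inputModified() contract: if an implementation
// overwrites its input, pass in a copy to keep the original intact.
class InputModifiedSketch {
    interface Decomposition {
        boolean decompose(double[] m);  // may overwrite m
        boolean inputModified();
    }

    // A made-up "decomposition" that normalizes the array in place
    static class InPlaceScaler implements Decomposition {
        public boolean decompose(double[] m) {
            double max = 0;
            for (double v : m) max = Math.max(max, Math.abs(v));
            if (max == 0) return false;  // report failure, like decompose()
            for (int i = 0; i < m.length; i++) m[i] /= max;
            return true;
        }
        public boolean inputModified() { return true; }
    }

    // Defensive pattern: copy the input first when it would be modified
    static double[] decomposeSafely(Decomposition d, double[] original) {
        double[] input = d.inputModified() ? original.clone() : original;
        if (!d.decompose(input))
            throw new RuntimeException("Decomposition failed");
        return input;
    }

    public static void main(String[] args) {
        double[] a = {2, 4};
        double[] result = decomposeSafely(new InPlaceScaler(), a);
        System.out.println(a[0] + " " + a[1]);            // original untouched
        System.out.println(result[0] + " " + result[1]);  // normalized copy
    }
}
```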
Below is an example of how to compute the SVD of a matrix:
<syntaxhighlight lang="java">
void decompositionExample( DenseMatrix64F A ) {
SingularValueDecomposition<DenseMatrix64F> svd = DecompositionFactory.svd(A.numRows,A.numCols);
if( !svd.decompose(A) )
throw new RuntimeException("Decomposition failed");
DenseMatrix64F U = svd.getU(null,false);
DenseMatrix64F W = svd.getW(null);
DenseMatrix64F V = svd.getV(null,false);
}
</syntaxhighlight>
Note how it checks the returned value from decompose.
In addition, DecompositionFactory provides functions for computing the quality of a decomposition. Being able to measure the decomposition's quality is an important way to validate its correctness. It works by "reconstructing" the original matrix and then computing the difference between the reconstruction and the original. The smaller the quality value, the better the decomposition, with an ideal value of around 1e-15 in most cases.
<syntaxhighlight lang="java">
if( DecompositionFactory.quality(A,svd) > 1e-3 )
throw new RuntimeException("Bad decomposition");
</syntaxhighlight>
List of functions in DecompositionFactory
{| class="wikitable"
! Decomposition !! Code
|-
| LU || DecompositionFactory.lu()
|-
| QR || DecompositionFactory.qr()
|-
| QRP || DecompositionFactory.qrp()
|-
| Cholesky || DecompositionFactory.chol()
|-
| Cholesky LDL || DecompositionFactory.cholLDL()
|-
| SVD || DecompositionFactory.svd()
|-
| Eigen || DecompositionFactory.eig()
|}
= Helper Functions for SVD and Eigen =
Two classes, SingularOps and EigenOps, are provided for extracting useful information from these decompositions or for highly specialized ways of computing them. Below is a list of the more common uses of these functions:
SingularOps
*descendingOrder()
**In EJML the ordering of the returned singular values is not in general guaranteed. This function will reorder the U,W,V matrices such that the singular values are in the standard largest to smallest ordering.
*nullSpace()
**Computes the null space from the provided decomposition.
*rank()
**Returns the matrix's rank.
*nullity()
**Returns the matrix's nullity.
EigenOps
*computeEigenValue()
**Given an eigen vector compute its eigenvalue.
*computeEigenVector()
**Given an eigenvalue compute its eigenvector.
*boundLargestEigenValue()
**Returns a lower and upper bound for the largest eigenvalue.
*createMatrixD() and createMatrixV()
**Reformats the results such that two matrices (D and V) contain the eigenvalues and eigenvectors respectively. This is similar to the format used by other libraries such as Jama.
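What computeEigenValue() does conceptually can be shown with the Rayleigh quotient: given a matrix A and one of its eigenvectors v, the eigenvalue is recovered as (v'Av)/(v'v). Below is a plain-Java sketch of this idea, not EJML's implementation:

```java
// Recover an eigenvalue from a known eigenvector via the Rayleigh
// quotient lambda = (v' A v) / (v' v). For an exact eigenvector this
// returns the exact eigenvalue; for an approximate one, an estimate.
class EigenValueSketch {
    static double computeEigenValue(double[][] A, double[] v) {
        int n = v.length;
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            double Av_i = 0;
            for (int j = 0; j < n; j++)
                Av_i += A[i][j] * v[j];
            num += v[i] * Av_i;
            den += v[i] * v[i];
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[][] A = {{2, 1}, {1, 2}};
        // [1,1] is an eigenvector of A with eigenvalue 3
        System.out.println(computeEigenValue(A, new double[]{1, 1}));
    }
}
```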
c1153c41324c6a7e58861d73ece4921bf5e4ce59
Example Complex Math
0
27
76
2015-03-24T14:40:49Z
Peter
1
Created page with " The Complex64F data type stores a single complex number. Inside the ComplexMath64F class are functions for performing standard math operations on Complex64F, such as additio..."
wikitext
text/x-wiki
The Complex64F data type stores a single complex number. Inside the ComplexMath64F class are functions for performing standard math operations on Complex64F, such as addition and division. The example below demonstrates how to perform these operations.
Code on GitHub:
[https://github.com/lessthanoptimal/ejml/blob/master/examples/src/org/ejml/example/ExampleComplexMath.java ExampleComplexMath]
== Example Code ==
<syntaxhighlight lang="java">
/**
* Demonstration of different operations that can be performed on complex numbers.
*
* @author Peter Abeles
*/
public class ExampleComplexMath {
public static void main( String []args ) {
Complex64F a = new Complex64F(1,2);
Complex64F b = new Complex64F(-1,-0.6);
Complex64F c = new Complex64F();
ComplexPolar64F polarC = new ComplexPolar64F();
System.out.println("a = "+a);
System.out.println("b = "+b);
System.out.println("------------------");
ComplexMath64F.plus(a, b, c);
System.out.println("a + b = "+c);
ComplexMath64F.minus(a, b, c);
System.out.println("a - b = "+c);
ComplexMath64F.multiply(a, b, c);
System.out.println("a * b = "+c);
ComplexMath64F.divide(a, b, c);
System.out.println("a / b = "+c);
System.out.println("------------------");
ComplexPolar64F polarA = new ComplexPolar64F();
ComplexMath64F.convert(a, polarA);
System.out.println("polar notation of a = "+polarA);
ComplexMath64F.pow(polarA, 3, polarC);
System.out.println("a ** 3 = "+polarC);
ComplexMath64F.convert(polarC, c);
System.out.println("a ** 3 = "+c);
}
}
</syntaxhighlight>
e7298227dfd7b319bfb682ca42947ee79bb14d9a
Download
0
6
77
18
2015-03-24T15:40:56Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.26/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages: 'core', 'dense64', 'denseC64', 'simple', and 'equation'. When including EJML in your project using Gradle or Maven you can reference them individually or simply reference the "all" package, which depends on every other package.
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.27-SNAPSHOT'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
</syntaxhighlight>
1eacded1580a614a9401923865bf48b8ea837dc1
95
77
2015-03-31T16:04:43Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.26/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages: 'core', 'dense64', 'denseC64', 'simple', and 'equation'. When including EJML in your project using Gradle or Maven you can reference them individually or simply reference the "all" package, which depends on every other package.
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.27-SNAPSHOT'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.27-SNAPSHOT</version>
</dependency>
</syntaxhighlight>
14a2364b78ac353c4a40ae5f9bc2da1dfc6b2170
Procedural
0
28
78
2015-03-25T14:52:51Z
Peter
1
Created page with "The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The down..."
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it is a bit like writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes [[DenseMatrix]] matrix types. For real numbers it takes in [http://ejml.org/javadoc/org/ejml/data/DenseMatrix64F.html DenseMatrix64F] and for complex [http://ejml.org/javadoc/org/ejml/data/CDenseMatrix64F.html CDenseMatrix64F]. These classes themselves only provide very basic operators for accessing elements within a matrix as well as its size and shape. More complex functions for manipulating DenseMatrix are available in various Ops classes, described below. Internally they store the matrix in a single array using a row-major format.
While it has a steeper learning curve, the procedural interface is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating DenseMatrix64F and most are contained inside of the org.ejml.ops package.
* CommonOps
** Provides the most common matrix operations.
* EigenOps
** Provides operations related to eigenvalues and eigenvectors.
* MatrixFeatures
** Used to compute various features related to a matrix.
* NormOps
** Operations for computing different matrix norms.
* SingularOps
** Operations related to singular value decompositions.
* SpecializedOps
** Grab bag for operations which do not fit in anywhere else.
* RandomMatrices
** Used to create different types of random matrices.
= Tips for Avoiding "new" =
TODO fill this out.
* reshape matrices instead of declaring new ones
* not all functions recycle memory
0804c9b87999f07c7782ce770efdb52ec88fe7bf
80
78
2015-03-25T15:04:09Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it is a bit like writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes [[DenseMatrix]] matrix types. There are functions for real matrices [http://ejml.org/javadoc/org/ejml/data/DenseMatrix64F.html DenseMatrix64F], complex matrices [http://ejml.org/javadoc/org/ejml/data/CDenseMatrix64F.html CDenseMatrix64F], and fixed sized matrices (FixedMatrix2x2_64F, ..., FixedMatrix6x6_64F). These classes themselves only provide very basic operators for accessing elements within a matrix as well as its size and shape. The complete set of functions for manipulating DenseMatrix is available in various Ops classes, described below.
Internally all dense matrix classes store the matrix in a single array using a row-major format. Fixed sized matrices and vectors instead unroll the matrix, storing each element as a separate class field. This allows for much faster access and avoids array overhead. However, if fixed sized matrices get too large performance starts to drop, likely due to CPU caching issues.
While it has a steeper learning curve, the procedural interface is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating DenseMatrix64F and most are contained inside of the org.ejml.ops package. The list below is provided for real matrices. For complex matrices add a "C" in front of the name, e.g. CCommonOps.
* CommonOps
** Provides the most common matrix operations.
* EigenOps
** Provides operations related to eigenvalues and eigenvectors.
* MatrixFeatures
** Used to compute various features related to a matrix.
* NormOps
** Operations for computing different matrix norms.
* SingularOps
** Operations related to singular value decompositions.
* SpecializedOps
** Grab bag for operations which do not fit in anywhere else.
* RandomMatrices
** Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
a9c3fed53716957e269eaf705ac037501c17f80b
82
80
2015-03-25T15:08:57Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it is a bit like writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes DenseMatrix matrix types. A complete list of these data types is given [[#DenseMatrix Types|below]]. These classes themselves only provide very basic operators for accessing elements within a matrix as well as its size and shape. The complete set of functions for manipulating DenseMatrix is available in various Ops classes, described below.
Internally all dense matrix classes store the matrix in a single array using a row-major format. Fixed sized matrices and vectors instead unroll the matrix, storing each element as a separate class field. This allows for much faster access and avoids array overhead. However, if fixed sized matrices get too large performance starts to drop, likely due to CPU caching issues.
While it has a steeper learning curve, the procedural interface is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= DenseMatrix Types =
{| class="wikitable"
! Name !! Description
|-
| [http://ejml.org/javadoc/org/ejml/data/DenseMatrix64F.html DenseMatrix64F] || Dense Real Matrix
|-
| [http://ejml.org/javadoc/org/ejml/data/CDenseMatrix64F.html CDenseMatrix64F] || Dense Complex Matrix
|-
| [http://ejml.org/javadoc/org/ejml/data/FixedMatrix4x4_64F.html FixedMatrixNxN_64F] || Fixed Size Dense Real Matrix
|-
| [http://ejml.org/javadoc/org/ejml/data/FixedMatrix4_64F.html FixedMatrixN_64F] || Fixed Size Dense Real Vector
|}
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating DenseMatrix64F and most are contained inside of the org.ejml.ops package. The list below is provided for real matrices. For complex matrices add a "C" in front of the name, e.g. CCommonOps.
* CommonOps
** Provides the most common matrix operations.
* EigenOps
** Provides operations related to eigenvalues and eigenvectors.
* MatrixFeatures
** Used to compute various features related to a matrix.
* NormOps
** Operations for computing different matrix norms.
* SingularOps
** Operations related to singular value decompositions.
* SpecializedOps
** Grab bag for operations which do not fit in anywhere else.
* RandomMatrices
** Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
87c8654422f6a10c2cf2e0f6726b7d01b592347a
92
82
2015-03-26T14:57:44Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it is a bit like writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes DenseMatrix matrix types. A complete list of these data types is given [[#DenseMatrix Types|below]]. These classes themselves only provide very basic operators for accessing elements within a matrix as well as its size and shape. The complete set of functions for manipulating DenseMatrix is available in various Ops classes, described below.
Internally all dense matrix classes store the matrix in a single array using a row-major format. Fixed sized matrices and vectors instead unroll the matrix, storing each element as a separate class field. This allows for much faster access and avoids array overhead. However, if fixed sized matrices get too large performance starts to drop, likely due to CPU caching issues.
While it has a steeper learning curve, the procedural interface is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= DenseMatrix Types =
{| style="wikitable"
! Name !! Description
|-
| {{DataDocLink|DenseMatrix64F}} || Dense Real Matrix
|-
| {{DataDocLink|CDenseMatrix64F}} || Dense Complex Matrix
|-
| {{DocLink|org/ejml/data/FixedMatrix3x3_64F.html|FixedMatrixNxN_64F}} || Fixed Size Dense Real Matrix
|-
| {{DocLink|org/ejml/data/FixedMatrix3_64F.html|FixedMatrixN_64F}} || Fixed Size Dense Real Vector
|}
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
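A minimal sketch of these accessors in use (the matrix values here are arbitrary):

<syntaxhighlight lang="java">
import org.ejml.data.DenseMatrix64F;

public class AccessorExample {
    public static void main( String[] args ) {
        DenseMatrix64F A = new DenseMatrix64F(3,3);

        // set and get with bounds checking
        A.set(0,1, 2.5);
        double a = A.get(0,1);

        // faster, but no bounds checking; only use when indices are known to be valid
        A.unsafe_set(2,2, 7.0);
        double b = A.unsafe_get(2,2);

        // 1-D index into the underlying row-major array: index = row*numCols + col
        A.set(4, 3.0);                      // element (1,1)
        double c = A.get(1*A.numCols + 1);  // also element (1,1)
    }
}
</syntaxhighlight>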
= Operations =
Several "Ops" classes provide functions for manipulating DenseMatrix64F; most are contained inside the org.ejml.ops package. The list below is for real matrices. For complex matrices add a "C" in front of the name, e.g. CCommonOps.
; {{OpsDocLink|CommonOps}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices}} : Used to create different types of random matrices.
For fixed-sized matrices FixedOpsN is provided, where N ranges from 2 to 6. FixedOpsN is similar in functionality to CommonOps.
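A typical procedural computation using CommonOps and RandomMatrices might look like the sketch below. Note that output matrices are declared by the caller, which is what gives this interface its control over memory:

<syntaxhighlight lang="java">
import java.util.Random;
import org.ejml.data.DenseMatrix64F;
import org.ejml.ops.CommonOps;
import org.ejml.ops.RandomMatrices;

public class OpsExample {
    public static void main( String[] args ) {
        Random rand = new Random(234);

        DenseMatrix64F A = RandomMatrices.createRandom(4,4,rand);
        DenseMatrix64F B = RandomMatrices.createRandom(4,4,rand);
        DenseMatrix64F C = new DenseMatrix64F(4,4);   // preallocated output, reusable across calls

        CommonOps.mult(A,B,C);       // C = A*B
        CommonOps.addEquals(C,B);    // C = C + B
        CommonOps.transpose(C);      // C = C', in place for square matrices

        double d = CommonOps.det(C);
    }
}
</syntaxhighlight>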
63a4dbc63c66e7cd80d07f515af8aa4de786df27
SimpleMatrix
0
30
83
2015-03-26T06:31:59Z
Peter
1
Created page with " SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was origin..."
wikitext
text/x-wiki
SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was originally inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. When using SimpleMatrix, memory management is automatically handled and it allows commands to be chained together using a flow paradigm. Switching between SimpleMatrix and the [[Procedural]] interface is easy, enabling the two programming paradigms to be mixed in the same code base.
When invoking a function in SimpleMatrix none of the input matrices, including the 'this' matrix, are modified during the function call. There is a slight performance hit when using SimpleMatrix and less control over memory management. See [[Performance]] for a comparison of runtime performance of the different interfaces.
Below is a brief overview of SimpleMatrix concepts.
== Chaining Operations ==
When using SimpleMatrix operations can be chained together. Chained operations are often easier to read and write.
<syntaxhighlight lang="java">
public SimpleMatrix process( SimpleMatrix A , SimpleMatrix B ) {
return A.transpose().mult(B).scale(12).invert();
}
</syntaxhighlight>
is equivalent to the following Matlab code:
<syntaxhighlight lang="java">C = inv((A' * B)*12.0)</syntaxhighlight>
== Working with DenseMatrix64F ==
To convert a {{DataDocLink|DenseMatrix64F}} into a SimpleMatrix call the wrap() function. Then, to access the internal DenseMatrix64F inside of a SimpleMatrix, call getMatrix().
<syntaxhighlight lang="java">
public DenseMatrix64F compute( DenseMatrix64F A , DenseMatrix64F B ) {
SimpleMatrix A_ = SimpleMatrix.wrap(A);
SimpleMatrix B_ = SimpleMatrix.wrap(B);
return A_.mult(B_).getMatrix();
}
</syntaxhighlight>
A DenseMatrix64F can also be passed into the SimpleMatrix constructor, but this will copy the input matrix. Unlike when wrap() is used, changes to the new SimpleMatrix will not modify the original DenseMatrix64F.
== Accessors ==
*get( row , col )
*set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
*get( index )
*set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
*iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
== Submatrices ==
A submatrix is a matrix whose elements are a subset of another matrix. Several different functions are provided for manipulating submatrices.
; extractMatrix : Extracts a rectangular submatrix from the original matrix.
; extractDiag : Creates a column vector containing just the diagonal elements of the matrix.
; extractVector : Extracts either an entire row or column.
; insertIntoThis : Inserts the passed in matrix into 'this' matrix.
; combine : Creates a new matrix that is a combination of the two inputs.
== Decompositions ==
Simplified ways to use popular matrix decompositions are provided. These decompositions provide fewer choices than their equivalents for DenseMatrix64F, but should meet most people's needs.
; svd : Computes the singular value decomposition of 'this' matrix
; eig : Computes the eigenvalue decomposition of 'this' matrix
Direct access to other decompositions (e.g. QR and Cholesky) is not provided in SimpleMatrix because solve() and inv() are provided instead. In more advanced applications use the procedural interface to compute those decompositions.
== Solve and Invert ==
; solve : Computes the solution to the set of linear equations
; inv : Computes the inverse of a square matrix
; pinv : Computes the pseudo-inverse for an arbitrary matrix
See [[Solving Linear Systems]] for more details on solving systems of equations.
== Other Functions ==
SimpleMatrix provides many other functions. For a complete list see the JavaDoc for [http://ejml.org/javadoc/org/ejml/simple/SimpleBase.html SimpleBase] and [http://ejml.org/javadoc/org/ejml/simple/SimpleMatrix.html SimpleMatrix]. Note that SimpleMatrix extends SimpleBase.
c7d10d2628cf1d293c048e525c32affdf77f425d
93
83
2015-03-26T15:02:04Z
Peter
1
wikitext
text/x-wiki
SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was originally inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. When using SimpleMatrix, memory management is automatically handled and it allows commands to be chained together using a flow paradigm. Switching between SimpleMatrix and the [[Procedural]] interface is easy, enabling the two programming paradigms to be mixed in the same code base.
When invoking a function in SimpleMatrix none of the input matrices, including the 'this' matrix, are modified during the function call. There is a slight performance hit when using SimpleMatrix and less control over memory management. See [[Performance]] for a comparison of runtime performance of the different interfaces.
Below is a brief overview of SimpleMatrix concepts.
== Chaining Operations ==
When using SimpleMatrix operations can be chained together. Chained operations are often easier to read and write.
<syntaxhighlight lang="java">
public SimpleMatrix process( SimpleMatrix A , SimpleMatrix B ) {
return A.transpose().mult(B).scale(12).invert();
}
</syntaxhighlight>
is equivalent to the following Matlab code:
<syntaxhighlight lang="java">C = inv((A' * B)*12.0)</syntaxhighlight>
== Working with DenseMatrix64F ==
To convert a {{DataDocLink|DenseMatrix64F}} into a SimpleMatrix call the wrap() function. Then, to access the internal DenseMatrix64F inside of a SimpleMatrix, call getMatrix().
<syntaxhighlight lang="java">
public DenseMatrix64F compute( DenseMatrix64F A , DenseMatrix64F B ) {
SimpleMatrix A_ = SimpleMatrix.wrap(A);
SimpleMatrix B_ = SimpleMatrix.wrap(B);
return A_.mult(B_).getMatrix();
}
</syntaxhighlight>
A {{DataDocLink|DenseMatrix64F}} can also be passed into the SimpleMatrix constructor, but this will copy the input matrix. Unlike when wrap() is used, changes to the new SimpleMatrix will not modify the original DenseMatrix64F.
== Accessors ==
*get( row , col )
*set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
*get( index )
*set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
*iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
== Submatrices ==
A submatrix is a matrix whose elements are a subset of another matrix. Several different functions are provided for manipulating submatrices.
; extractMatrix : Extracts a rectangular submatrix from the original matrix.
; extractDiag : Creates a column vector containing just the diagonal elements of the matrix.
; extractVector : Extracts either an entire row or column.
; insertIntoThis : Inserts the passed in matrix into 'this' matrix.
; combine : Creates a new matrix that is a combination of the two inputs.
== Decompositions ==
Simplified ways to use popular matrix decompositions are provided. These decompositions provide fewer choices than their equivalents for DenseMatrix64F, but should meet most people's needs.
; svd : Computes the singular value decomposition of 'this' matrix
; eig : Computes the eigenvalue decomposition of 'this' matrix
Direct access to other decompositions (e.g. QR and Cholesky) is not provided in SimpleMatrix because solve() and inv() are provided instead. In more advanced applications use the procedural interface to compute those decompositions.
== Solve and Invert ==
; solve : Computes the solution to the set of linear equations
; inv : Computes the inverse of a square matrix
; pinv : Computes the pseudo-inverse for an arbitrary matrix
See [[Solving Linear Systems]] for more details on solving systems of equations.
== Other Functions ==
SimpleMatrix provides many other functions. For a complete list see the JavaDoc for {{DocLink|org/ejml/simple/SimpleBase.html|SimpleBase}} and {{DocLink|org/ejml/simple/SimpleMatrix.html|SimpleMatrix}}. Note that SimpleMatrix extends SimpleBase.
== Adding Functionality ==
You can turn SimpleMatrix into your own data structure and extend its capabilities. See the [[Example_Customizing_SimpleMatrix|example on customizing SimpleMatrix]] for the details.
22a758beb83b2248b52cb3c57e265660f8b3a40a
94
93
2015-03-26T15:02:40Z
Peter
1
wikitext
text/x-wiki
SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was originally inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. When using SimpleMatrix, memory management is automatically handled and it allows commands to be chained together using a flow paradigm. Switching between SimpleMatrix and the [[Procedural]] interface is easy, enabling the two programming paradigms to be mixed in the same code base.
When invoking a function in SimpleMatrix none of the input matrices, including the 'this' matrix, are modified during the function call. There is a slight performance hit when using SimpleMatrix and less control over memory management. See [[Performance]] for a comparison of runtime performance of the different interfaces.
Below is a brief overview of SimpleMatrix concepts.
== Chaining Operations ==
When using SimpleMatrix operations can be chained together. Chained operations are often easier to read and write.
<syntaxhighlight lang="java">
public SimpleMatrix process( SimpleMatrix A , SimpleMatrix B ) {
return A.transpose().mult(B).scale(12).invert();
}
</syntaxhighlight>
is equivalent to the following Matlab code:
<syntaxhighlight lang="java">C = inv((A' * B)*12.0)</syntaxhighlight>
== Working with DenseMatrix64F ==
To convert a {{DataDocLink|DenseMatrix64F}} into a SimpleMatrix call the wrap() function. Then, to access the internal DenseMatrix64F inside of a SimpleMatrix, call getMatrix().
<syntaxhighlight lang="java">
public DenseMatrix64F compute( DenseMatrix64F A , DenseMatrix64F B ) {
SimpleMatrix A_ = SimpleMatrix.wrap(A);
SimpleMatrix B_ = SimpleMatrix.wrap(B);
return A_.mult(B_).getMatrix();
}
</syntaxhighlight>
A {{DataDocLink|DenseMatrix64F}} can also be passed into the SimpleMatrix constructor, but this will copy the input matrix. Unlike when wrap() is used, changes to the new SimpleMatrix will not modify the original DenseMatrix64F.
== Accessors ==
*get( row , col )
*set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
*get( index )
*set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
*iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
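For example, a minimal sketch of the element accessors:

<syntaxhighlight lang="java">
import org.ejml.simple.SimpleMatrix;

public class SimpleAccessorExample {
    public static void main( String[] args ) {
        SimpleMatrix A = new SimpleMatrix(2,2);

        A.set(0,0,1); A.set(0,1,2);
        A.set(1,0,3); A.set(1,1,4);

        double a01 = A.get(0,1);   // access by row and column
        double a11 = A.get(3);     // access by row-major index: element (1,1)
    }
}
</syntaxhighlight>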
== Submatrices ==
A submatrix is a matrix whose elements are a subset of another matrix. Several different functions are provided for manipulating submatrices.
; extractMatrix : Extracts a rectangular submatrix from the original matrix.
; extractDiag : Creates a column vector containing just the diagonal elements of the matrix.
; extractVector : Extracts either an entire row or column.
; insertIntoThis : Inserts the passed in matrix into 'this' matrix.
; combine : Creates a new matrix that is a combination of the two inputs.
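The sketch below illustrates a few of these functions. Note that in extractMatrix the end indices are exclusive:

<syntaxhighlight lang="java">
import java.util.Random;
import org.ejml.simple.SimpleMatrix;

public class SubmatrixExample {
    public static void main( String[] args ) {
        SimpleMatrix A = SimpleMatrix.random(4,4,-1,1,new Random(23));

        // rows 1 to 2 and columns 0 to 1 (end indices are exclusive)
        SimpleMatrix B = A.extractMatrix(1,3,0,2);

        // extract the third column as a vector
        SimpleMatrix c = A.extractVector(false,2);

        // diagonal elements as a column vector
        SimpleMatrix d = A.extractDiag();
    }
}
</syntaxhighlight>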
== Decompositions ==
Simplified ways to use popular matrix decompositions are provided. These decompositions provide fewer choices than their equivalents for DenseMatrix64F, but should meet most people's needs.
; svd : Computes the singular value decomposition of 'this' matrix
; eig : Computes the eigenvalue decomposition of 'this' matrix
Direct access to other decompositions (e.g. QR and Cholesky) is not provided in SimpleMatrix because solve() and inv() are provided instead. In more advanced applications use the procedural interface to compute those decompositions.
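A short sketch of both decompositions, assuming the SimpleSVD and SimpleEVD helper classes returned by svd() and eig() in org.ejml.simple:

<syntaxhighlight lang="java">
import java.util.Random;
import org.ejml.simple.SimpleEVD;
import org.ejml.simple.SimpleMatrix;
import org.ejml.simple.SimpleSVD;

public class DecompositionExample {
    public static void main( String[] args ) {
        SimpleMatrix A = SimpleMatrix.random(3,3,-1,1,new Random(2));

        // singular value decomposition: A = U*W*V'
        SimpleSVD svd = A.svd();
        SimpleMatrix U = svd.getU();
        SimpleMatrix W = svd.getW();
        SimpleMatrix V = svd.getV();

        // eigenvalue decomposition; eigenvalues of a general matrix can be complex
        SimpleEVD evd = A.eig();
        double ev0 = evd.getEigenvalue(0).getReal();
    }
}
</syntaxhighlight>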
== Solve and Invert ==
; solve : Computes the solution to the set of linear equations
; inv : Computes the inverse of a square matrix
; pinv : Computes the pseudo-inverse for an arbitrary matrix
See [[Solving Linear Systems]] for more details on solving systems of equations.
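A minimal sketch of solving a square linear system with SimpleMatrix:

<syntaxhighlight lang="java">
import java.util.Random;
import org.ejml.simple.SimpleMatrix;

public class SolveExample {
    public static void main( String[] args ) {
        Random rand = new Random(9);

        // solve A*x = b, where A is 3x3 and b is 3x1
        SimpleMatrix A = SimpleMatrix.random(3,3,-1,1,rand);
        SimpleMatrix b = SimpleMatrix.random(3,1,-1,1,rand);

        SimpleMatrix x = A.solve(b);

        // the residual A*x - b should be essentially zero
        double error = A.mult(x).minus(b).normF();
    }
}
</syntaxhighlight>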
== Other Functions ==
SimpleMatrix provides many other functions. For a complete list see the JavaDoc for {{DocLink|org/ejml/simple/SimpleBase.html|SimpleBase}} and {{DocLink|org/ejml/simple/SimpleMatrix.html|SimpleMatrix}}. Note that SimpleMatrix extends SimpleBase.
== Adding Functionality ==
You can turn SimpleMatrix into your own data structure and extend its capabilities. See the [[Example_Customizing_SimpleMatrix|example on customizing SimpleMatrix]] for the details.
f8e8982c38813dd62f60e4c91708449df485f3e7
Template:DocLink
10
31
84
2015-03-26T12:00:04Z
Peter
1
Created page with "[http://ejml.org/javadoc/{{{1}}} {{{2}}}]"
wikitext
text/x-wiki
[http://ejml.org/javadoc/{{{1}}} {{{2}}}]
90178a388c8c22e613b6bffeec45c1934e632342
Template:OpsDocLink
10
32
85
2015-03-26T12:01:43Z
Peter
1
Created page with "{{DocLink|http://ejml.org/javadoc/org/ejml/ops/{{{1}}}.html|{{{1}}}} }}"
wikitext
text/x-wiki
{{DocLink|http://ejml.org/javadoc/org/ejml/ops/{{{1}}}.html|{{{1}}}} }}
5421716f9a81adb87ffd8f795aa210ae27fb2832
86
85
2015-03-26T12:03:00Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/ops/{{{1}}.html|{{{1}}} }}
06484e7247bc404a894e7a7f0dd89ef6f07129ec
87
86
2015-03-26T12:03:45Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/ops/{{{1}}}.html|{{{1}}} }}
9cab2faa20cb833b3eb452d8d58bc5f65fa26aab
Capabilities
0
33
88
2015-03-26T12:06:17Z
Peter
1
Created page with "{| style="wikitable" ! !! Dense Real !! Fixed real !! Dense Complex |- | Basic Arithmetic || X || X || X |- | Element-Wise Ops || X || X || X |- | Determinant || X || X || X..."
wikitext
text/x-wiki
{| style="wikitable"
! !! Dense Real !! Fixed real !! Dense Complex
|-
| Basic Arithmetic || X || X || X
|-
| Element-Wise Ops || X || X || X
|-
| Determinant || X || X || X
|-
| Inverse || X || X || X
|-
| Solve m=n || X || X || X
|-
| Solve m>n || X || X || X
|-
| LU || X || X || X
|-
| LUP || X || X || X
|-
| Cholesky || X || X || X
|-
| QR || X || X || X
|-
| QRP || X || X || X
|-
| SVD || X || X || X
|-
| Eigen || X || X || X
|}
The above table summarizes at a high level the capabilities available by matrix type. To see a complete list of features check out the following classes and factories. Note that the capabilities also vary by which interface you use; see the interface specific documentation for that information. The procedural interface supports everything.
{| style="wikitable"
! Dense Real !! Fixed real !! Dense Complex
|-
| {{OpsDocLink|CommonOps}} || {{DocLink|org/ejml/alg/fixed/FixedOps3.html|FixedOps}} || {{OpsDocLink|CCommonOps}}
|-
| EigenOps || ||
|-
| MatrixFeatures || || CMatrixFeatures
|-
| MatrixVisualization || ||
|-
| NormOps || || CNormOps
|-
| RandomMatrices || || CRandomMatrices
|-
| SingularOps || ||
|-
| SpecializedOps || || CSpecializedOps
|}
f9364fcf717040d56541ead8e1a9754ba06b6fa2
89
88
2015-03-26T12:11:46Z
Peter
1
wikitext
text/x-wiki
{| class="wikitable"
! !! Dense Real !! Fixed real !! Dense Complex
|-
| Basic Arithmetic || X || X || X
|-
| Element-Wise Ops || X || X || X
|-
| Transpose || X || X || X
|-
| Determinant || X || X || X
|-
| Norm || X || || X
|-
| Inverse || X || X || X
|-
| Solve m=n || X || || X
|-
| Solve m>n || X || || X
|-
| LU || X || || X
|-
| Cholesky || X || || X
|-
| QR || X || || X
|-
| QRP || X || ||
|-
| SVD || X || ||
|-
| Eigen || X || ||
|}
The above table summarizes at a high level the capabilities available by matrix type. To see a complete list of features check out the following classes and factories. Note that the capabilities also vary by which interface you use; see the interface specific documentation for that information. The procedural interface supports everything.
{| class="wikitable"
! Dense Real !! Fixed real !! Dense Complex
|-
| {{OpsDocLink|CommonOps}} || {{DocLink|org/ejml/alg/fixed/FixedOps3.html|FixedOps}} || {{OpsDocLink|CCommonOps}}
|-
| {{OpsDocLink|EigenOps}} || ||
|-
| {{OpsDocLink|MatrixFeatures}} || || {{OpsDocLink|CMatrixFeatures}}
|-
| {{OpsDocLink|MatrixVisualization}} || ||
|-
| {{OpsDocLink|NormOps}} || || {{OpsDocLink|CNormOps}}
|-
| {{OpsDocLink|RandomMatrices}} || || {{OpsDocLink|CRandomMatrices}}
|-
| {{OpsDocLink|SingularOps}} || ||
|-
| {{OpsDocLink|SpecializedOps}} || || {{OpsDocLink|CSpecializedOps}}
|}
f288403575d36849391ec52f0fba6610b0237090
90
89
2015-03-26T12:13:32Z
Peter
1
wikitext
text/x-wiki
= Linear Algebra Capabilities =
{| class="wikitable"
! !! Dense Real !! Fixed real !! Dense Complex
|-
| Basic Arithmetic || X || X || X
|-
| Element-Wise Ops || X || X || X
|-
| Transpose || X || X || X
|-
| Determinant || X || X || X
|-
| Norm || X || || X
|-
| Inverse || X || X || X
|-
| Solve m=n || X || || X
|-
| Solve m>n || X || || X
|-
| LU || X || || X
|-
| Cholesky || X || || X
|-
| QR || X || || X
|-
| QRP || X || ||
|-
| SVD || X || ||
|-
| Eigen Symm || X || ||
|-
| Eigen General || X || ||
|}
The above table summarizes at a high level the capabilities available by matrix type. To see a complete list of features check out the following classes and factories. Note that the capabilities also vary by which interface you use; see the interface specific documentation for that information. The procedural interface supports everything.
{| class="wikitable"
! Dense Real !! Fixed real !! Dense Complex
|-
| {{OpsDocLink|CommonOps}} || {{DocLink|org/ejml/alg/fixed/FixedOps3.html|FixedOps}} || {{OpsDocLink|CCommonOps}}
|-
| {{OpsDocLink|EigenOps}} || ||
|-
| {{OpsDocLink|MatrixFeatures}} || || {{OpsDocLink|CMatrixFeatures}}
|-
| {{OpsDocLink|MatrixVisualization}} || ||
|-
| {{OpsDocLink|NormOps}} || || {{OpsDocLink|CNormOps}}
|-
| {{OpsDocLink|RandomMatrices}} || || {{OpsDocLink|CRandomMatrices}}
|-
| {{OpsDocLink|SingularOps}} || ||
|-
| {{OpsDocLink|SpecializedOps}} || || {{OpsDocLink|CSpecializedOps}}
|}
= Other Features =
* File IO
* Visualization
9eada8689f13ce63b547db6acdaa0d367c45bb0c
Template:DataDocLink
10
34
91
2015-03-26T14:50:35Z
Peter
1
Created page with "{{DocLink|org/ejml/data/{{{1}}}.html|{{{1}}} }}"
wikitext
text/x-wiki
{{DocLink|org/ejml/data/{{{1}}}.html|{{{1}}} }}
47985d8030a3fa11eb4ac6d9e8f08dfbf649f3ab
Example Principal Component Analysis
0
13
99
40
2015-04-01T02:41:23Z
Peter
1
wikitext
text/x-wiki
Principal Component Analysis (PCA) is a popular and simple-to-implement classification technique, often used in face recognition. The following is an example of how to implement it in EJML using the procedural interface. It is assumed that the reader is already familiar with PCA.
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PrincipalComponentAnalysis.java PrincipalComponentAnalysis]
For additional information on PCA:
* [http://en.wikipedia.org/wiki/Principal_component_analysis General information on Wikipedia]
= Sample Code =
<syntaxhighlight lang="java">
/**
* <p>
* The following is a simple example of how to perform basic principal component analysis in EJML.
* </p>
*
* <p>
* Principal Component Analysis (PCA) is typically used to develop a linear model for a set of data
* (e.g. face images) which can then be used to test for membership. PCA works by converting the
* set of data to a new basis that is a subspace of the original set. The subspace is selected
* to maximize information.
* </p>
* <p>
* PCA is typically derived as an eigenvalue problem. However in this implementation {@link org.ejml.interfaces.decomposition.SingularValueDecomposition SVD}
* is used instead because it will produce a more numerically stable solution. Computation using EVD requires explicitly
* computing the variance of each sample set. The variance is computed by squaring the residual, which can
* cause loss of precision.
* </p>
*
* <p>
* Usage:<br>
* 1) call setup()<br>
* 2) For each sample (e.g. an image ) call addSample()<br>
* 3) After all the samples have been added call computeBasis()<br>
* 4) Call sampleToEigenSpace() , eigenToSampleSpace() , errorMembership() , response()
* </p>
*
* @author Peter Abeles
*/
public class PrincipalComponentAnalysis {
// principal component subspace is stored in the rows
private DenseMatrix64F V_t;
// how many principal components are used
private int numComponents;
// where the data is stored
private DenseMatrix64F A = new DenseMatrix64F(1,1);
private int sampleIndex;
// mean values of each element across all the samples
double mean[];
public PrincipalComponentAnalysis() {
}
/**
* Must be called before any other functions. Declares and sets up internal data structures.
*
* @param numSamples Number of samples that will be processed.
* @param sampleSize Number of elements in each sample.
*/
public void setup( int numSamples , int sampleSize ) {
mean = new double[ sampleSize ];
A.reshape(numSamples,sampleSize,false);
sampleIndex = 0;
numComponents = -1;
}
/**
* Adds a new sample of the raw data to internal data structure for later processing. All the samples
* must be added before computeBasis is called.
*
* @param sampleData Sample from original raw data.
*/
public void addSample( double[] sampleData ) {
if( A.getNumCols() != sampleData.length )
throw new IllegalArgumentException("Unexpected sample size");
if( sampleIndex >= A.getNumRows() )
throw new IllegalArgumentException("Too many samples");
for( int i = 0; i < sampleData.length; i++ ) {
A.set(sampleIndex,i,sampleData[i]);
}
sampleIndex++;
}
/**
* Computes a basis (the principal components) from the most dominant eigenvectors.
*
* @param numComponents Number of vectors it will use to describe the data. Typically much
* smaller than the number of elements in the input vector.
*/
public void computeBasis( int numComponents ) {
if( numComponents > A.getNumCols() )
throw new IllegalArgumentException("More components requested than the data's length.");
if( sampleIndex != A.getNumRows() )
throw new IllegalArgumentException("Not all the data has been added");
if( numComponents > sampleIndex )
throw new IllegalArgumentException("More data needed to compute the desired number of components");
this.numComponents = numComponents;
// compute the mean of all the samples
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
mean[j] += A.get(i,j);
}
}
for( int j = 0; j < mean.length; j++ ) {
mean[j] /= A.getNumRows();
}
// subtract the mean from the original data
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
A.set(i,j,A.get(i,j)-mean[j]);
}
}
// Compute SVD and save time by not computing U
SingularValueDecomposition<DenseMatrix64F> svd =
DecompositionFactory.svd(A.numRows, A.numCols, false, true, false);
if( !svd.decompose(A) )
throw new RuntimeException("SVD failed");
V_t = svd.getV(null,true);
DenseMatrix64F W = svd.getW(null);
// Singular values are in an arbitrary order initially
SingularOps.descendingOrder(null,false,W,V_t,true);
// strip off unneeded components and find the basis
V_t.reshape(numComponents,mean.length,true);
}
/**
* Returns a vector from the PCA's basis.
*
* @param which Which component's vector is to be returned.
* @return Vector from the PCA basis.
*/
public double[] getBasisVector( int which ) {
if( which < 0 || which >= numComponents )
throw new IllegalArgumentException("Invalid component");
DenseMatrix64F v = new DenseMatrix64F(1,A.numCols);
CommonOps.extract(V_t,which,which+1,0,A.numCols,v,0,0);
return v.data;
}
/**
* Converts a vector from sample space into eigen space.
*
* @param sampleData Sample space data.
* @return Eigen space projection.
*/
public double[] sampleToEigenSpace( double[] sampleData ) {
if( sampleData.length != A.getNumCols() )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1,true,sampleData);
DenseMatrix64F r = new DenseMatrix64F(numComponents,1);
CommonOps.subtract(s, mean, s);
CommonOps.mult(V_t,s,r);
return r.data;
}
/**
* Converts a vector from eigen space into sample space.
*
* @param eigenData Eigen space data.
* @return Sample space projection.
*/
public double[] eigenToSampleSpace( double[] eigenData ) {
if( eigenData.length != numComponents )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1);
DenseMatrix64F r = DenseMatrix64F.wrap(numComponents,1,eigenData);
CommonOps.multTransA(V_t,r,s);
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
CommonOps.add(s,mean,s);
return s.data;
}
/**
* <p>
* The membership error for a sample. If the error is less than a threshold then
* it can be considered a member. The threshold's value depends on the data set.
* </p>
* <p>
* The error is computed by projecting the sample into eigenspace, projecting
* it back into sample space, and computing the Euclidean norm of the difference.
* </p>
*
* @param sampleA The sample whose membership status is being considered.
* @return Its membership error.
*/
public double errorMembership( double[] sampleA ) {
double[] eig = sampleToEigenSpace(sampleA);
double[] reproj = eigenToSampleSpace(eig);
double total = 0;
for( int i = 0; i < reproj.length; i++ ) {
double d = sampleA[i] - reproj[i];
total += d*d;
}
return Math.sqrt(total);
}
/**
* Computes the dot product of each basis vector against the sample. Can be used as a measure
* for membership in the training sample set. High values correspond to a better fit.
*
* @param sample Sample of original data.
* @return Higher value indicates it is more likely to be a member of input dataset.
*/
public double response( double[] sample ) {
if( sample.length != A.numCols )
throw new IllegalArgumentException("Expected input vector to be in sample space");
DenseMatrix64F dots = new DenseMatrix64F(numComponents,1);
DenseMatrix64F s = DenseMatrix64F.wrap(A.numCols,1,sample);
CommonOps.mult(V_t,s,dots);
return NormOps.normF(dots);
}
}
</syntaxhighlight>
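The usage sequence described in the Javadoc above might look like the following sketch; the sample data here is random and purely illustrative:

<syntaxhighlight lang="java">
import java.util.Random;

public class PcaUsageExample {
    public static void main( String[] args ) {
        Random rand = new Random(234);

        PrincipalComponentAnalysis pca = new PrincipalComponentAnalysis();

        // 1) declare internal data structures: 10 samples with 5 elements each
        pca.setup(10,5);

        // 2) add each sample
        double[] first = null;
        for( int i = 0; i < 10; i++ ) {
            double[] sample = new double[5];
            for( int j = 0; j < 5; j++ )
                sample[j] = rand.nextGaussian();
            if( first == null )
                first = sample;
            pca.addSample(sample);
        }

        // 3) compute a basis from the 3 most dominant components
        pca.computeBasis(3);

        // 4) project a sample and check how well the model describes it
        double[] eigen = pca.sampleToEigenSpace(first);
        double error = pca.errorMembership(first);
    }
}
</syntaxhighlight>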
fa127b8de03a8b0e74ccb2f6dc946ba13bc0713d
Example Polynomial Fitting
0
14
100
42
2015-04-01T02:42:29Z
Peter
1
wikitext
text/x-wiki
In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are: 1) how to create a linear solver using LinearSolverFactory, 2) how to use an adjustable linear solver, and 3) effective matrix reshaping. This is all done using the procedural interface.
First a best fit polynomial is fit to a set of data, then outliers are removed from the observation set and the coefficients are recomputed. Outliers are removed efficiently using an adjustable solver that does not re-solve the whole system.
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PolynomialFit.java PolynomialFit]
= PolynomialFit Example Code =
<syntaxhighlight lang="java">
/**
* <p>
* This example demonstrates how a polynomial can be fit to a set of data. This is done by
* using a least squares solver that is adjustable. By using an adjustable solver elements
* can be inexpensively removed and the coefficients recomputed. This is much less expensive
* than resolving the whole system from scratch.
* </p>
* <p>
* The following is demonstrated:<br>
* <ol>
* <li>Creating a solver using LinearSolverFactory</li>
* <li>Using an adjustable solver</li>
* <li>reshaping</li>
* </ol>
* @author Peter Abeles
*/
public class PolynomialFit {
// Vandermonde matrix
DenseMatrix64F A;
// matrix containing computed polynomial coefficients
DenseMatrix64F coef;
// observation matrix
DenseMatrix64F y;
// solver used to compute
AdjustableLinearSolver solver;
/**
* Constructor.
*
* @param degree The polynomial's degree which is to be fit to the observations.
*/
public PolynomialFit( int degree ) {
coef = new DenseMatrix64F(degree+1,1);
A = new DenseMatrix64F(1,degree+1);
y = new DenseMatrix64F(1,1);
// create a solver that allows elements to be added or removed efficiently
solver = LinearSolverFactory.adjustable();
}
/**
* Returns the computed coefficients
*
* @return polynomial coefficients that best fit the data.
*/
public double[] getCoef() {
return coef.data;
}
/**
* Computes the best fit set of polynomial coefficients to the provided observations.
*
* @param samplePoints where the observations were sampled.
* @param observations A set of observations.
*/
public void fit( double samplePoints[] , double[] observations ) {
// Create a copy of the observations and put it into a matrix
y.reshape(observations.length,1,false);
System.arraycopy(observations,0, y.data,0,observations.length);
// reshape the matrix to avoid unnecessarily declaring new memory
// save values is set to false since its old values don't matter
A.reshape(y.numRows, coef.numRows,false);
// set up the A matrix
for( int i = 0; i < observations.length; i++ ) {
double obs = 1;
for( int j = 0; j < coef.numRows; j++ ) {
A.set(i,j,obs);
obs *= samplePoints[i];
}
}
// process the A matrix and see if it failed
if( !solver.setA(A) )
throw new RuntimeException("Solver failed");
// solve for the coefficients
solver.solve(y,coef);
}
/**
* Removes the observation that fits the model the worst and recomputes the coefficients.
* This is done efficiently by using an adjustable solver. Often times the elements with
* the largest errors are outliers and not part of the system being modeled. By removing them
* a more accurate set of coefficients can be computed.
*/
public void removeWorstFit() {
// find the observation with the most error
int worstIndex=-1;
double worstError = -1;
for( int i = 0; i < y.numRows; i++ ) {
double predictedObs = 0;
for( int j = 0; j < coef.numRows; j++ ) {
predictedObs += A.get(i,j)*coef.get(j,0);
}
double error = Math.abs(predictedObs- y.get(i,0));
if( error > worstError ) {
worstError = error;
worstIndex = i;
}
}
// nothing left to remove, so just return
if( worstIndex == -1 )
return;
// remove that observation
removeObservation(worstIndex);
// update A
solver.removeRowFromA(worstIndex);
// solve for the parameters again
solver.solve(y,coef);
}
/**
* Removes an element from the observation matrix.
*
* @param index which element is to be removed
*/
private void removeObservation( int index ) {
final int N = y.numRows-1;
final double d[] = y.data;
// shift
for( int i = index; i < N; i++ ) {
d[i] = d[i+1];
}
y.numRows--;
}
}
</syntaxhighlight>
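The heart of fit() is building a Vandermonde design matrix, where row i is [1, x_i, x_i^2, ...], and solving the least-squares system A c ≈ y. As a standalone illustration of that construction (plain arrays, normal equations, and hypothetical helper names rather than EJML's API), here is a minimal degree-1 fit:

```java
public class VandermondeSketch {
    // Build the Vandermonde design matrix: row i is [1, x_i, x_i^2, ...],
    // the same loop structure used in PolynomialFit.fit()
    static double[][] vandermonde(double[] x, int degree) {
        double[][] A = new double[x.length][degree + 1];
        for (int i = 0; i < x.length; i++) {
            double p = 1;
            for (int j = 0; j <= degree; j++) {
                A[i][j] = p;
                p *= x[i];
            }
        }
        return A;
    }

    // Solve the 2x2 normal equations (A^T A) c = A^T y for a degree-1 fit
    static double[] fitLine(double[] x, double[] y) {
        double[][] A = vandermonde(x, 1);
        double s00 = 0, s01 = 0, s11 = 0, b0 = 0, b1 = 0;
        for (int i = 0; i < x.length; i++) {
            s00 += A[i][0] * A[i][0];
            s01 += A[i][0] * A[i][1];
            s11 += A[i][1] * A[i][1];
            b0  += A[i][0] * y[i];
            b1  += A[i][1] * y[i];
        }
        // Cramer's rule on the symmetric 2x2 system
        double det = s00 * s11 - s01 * s01;
        return new double[]{ (b0 * s11 - b1 * s01) / det,
                             (b1 * s00 - b0 * s01) / det };
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 2, 3};
        double[] y = {1, 3, 5, 7}; // exactly y = 1 + 2x
        double[] c = fitLine(x, y);
        System.out.println("c0 = " + c[0] + ", c1 = " + c[1]);
    }
}
```

The real class instead hands A to an AdjustableLinearSolver, which is what makes the later removeRowFromA() update cheap.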
ee8bad47407336256beafa6cac75d92ead7c0d25
Example Polynomial Roots
0
15
101
44
2015-04-01T02:44:16Z
Peter
1
wikitext
text/x-wiki
Eigenvalue decomposition can be used to find the roots of a polynomial by constructing the so-called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While faster techniques do exist for root finding, this is one of the most stable and probably the easiest to implement.
Because the companion matrix is not symmetric, a generalized eigenvalue [[MatrixDecomposition|decomposition]] is needed. The roots of the polynomial may also be [http://en.wikipedia.org/wiki/Complex_number complex].
Example on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PolynomialRootFinder.java PolynomialRootFinder]
= Example Code =
<syntaxhighlight lang="java">
public class PolynomialRootFinder {
/**
* <p>
* Given a set of polynomial coefficients, compute the roots of the polynomial. Depending on
 * the polynomial being considered the roots may contain complex numbers. When complex numbers are
* present they will come in pairs of complex conjugates.
* </p>
*
* <p>
 * Coefficients are ordered from least to most significant, e.g.: y = c[0] + x*c[1] + x*x*c[2].
* </p>
*
* @param coefficients Coefficients of the polynomial.
* @return The roots of the polynomial
*/
public static Complex64F[] findRoots(double... coefficients) {
int N = coefficients.length-1;
// Construct the companion matrix
DenseMatrix64F c = new DenseMatrix64F(N,N);
double a = coefficients[N];
for( int i = 0; i < N; i++ ) {
c.set(i,N-1,-coefficients[i]/a);
}
for( int i = 1; i < N; i++ ) {
c.set(i,i-1,1);
}
// use generalized eigenvalue decomposition to find the roots
EigenDecomposition<DenseMatrix64F> evd = DecompositionFactory.eig(N,false);
evd.decompose(c);
Complex64F[] roots = new Complex64F[N];
for( int i = 0; i < N; i++ ) {
roots[i] = evd.getEigenvalue(i);
}
return roots;
}
}
</syntaxhighlight>
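For a quadratic c0 + c1*x + c2*x^2 the companion matrix built above is the 2x2 [[0, -c0/c2], [1, -c1/c2]], and its eigenvalues are exactly the roots. That can be verified by hand for a 2x2, since its eigenvalues solve lambda^2 - trace*lambda + det = 0. A self-contained sketch (plain arrays, hypothetical names, real roots only, no EJML):

```java
public class CompanionSketch {
    // Companion matrix of c0 + c1*x + c2*x^2, same column form as findRoots()
    static double[][] companion(double c0, double c1, double c2) {
        return new double[][]{
            {0, -c0 / c2},
            {1, -c1 / c2}
        };
    }

    // Eigenvalues of a 2x2 matrix from its characteristic polynomial
    // lambda^2 - trace*lambda + det = 0 (assumes a non-negative discriminant)
    static double[] eigen2x2(double[][] m) {
        double tr = m[0][0] + m[1][1];
        double det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
        double disc = Math.sqrt(tr * tr - 4 * det);
        return new double[]{ (tr - disc) / 2, (tr + disc) / 2 };
    }

    public static void main(String[] args) {
        // p(x) = 2 - 3x + x^2 = (x - 1)(x - 2), so the roots are 1 and 2
        double[] roots = eigen2x2(companion(2, -3, 1));
        System.out.println(roots[0] + ", " + roots[1]);
    }
}
```

For degree N the same idea holds, but the eigenvalues must come from a numerical decomposition such as the one findRoots() uses.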
e97f27605f780505a47b17007f31b86ce3d1dd78
Example Customizing Equations
0
19
102
55
2015-04-01T02:47:36Z
Peter
1
wikitext
text/x-wiki
While Equations provides many of the most common functions used in linear algebra, there are many it does not. The following example demonstrates how to add your own functions to Equations, allowing you to extend its capabilities.
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/EquationCustomFunction.java EquationCustomFunction]
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration on how to create and use a custom function in Equation. A custom function must implement
* ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes.
*
* @author Peter Abeles
*/
public class EquationCustomFunction {
public static void main(String[] args) {
Random rand = new Random(234);
Equation eq = new Equation();
eq.getFunctions().add("multTransA",createMultTransA());
SimpleMatrix A = new SimpleMatrix(1,1); // will be resized
SimpleMatrix B = SimpleMatrix.random(3,4,-1,1,rand);
SimpleMatrix C = SimpleMatrix.random(3,4,-1,1,rand);
eq.alias(A,"A",B,"B",C,"C");
eq.process("A=multTransA(B,C)");
System.out.println("Found");
System.out.println(A);
System.out.println("Expected");
B.transpose().mult(C).print();
}
/**
* Create the function. Be sure to handle all possible input types and combinations correctly and provide
* meaningful error messages. The output matrix should be resized to fit the inputs.
*/
public static ManagerFunctions.InputN createMultTransA() {
return new ManagerFunctions.InputN() {
@Override
public Operation.Info create(List<Variable> inputs, ManagerTempVariables manager ) {
if( inputs.size() != 2 )
throw new RuntimeException("Two inputs required");
final Variable varA = inputs.get(0);
final Variable varB = inputs.get(1);
Operation.Info ret = new Operation.Info();
if( varA instanceof VariableMatrix && varB instanceof VariableMatrix ) {
// The output matrix or scalar variable must be created with the provided manager
final VariableMatrix output = manager.createMatrix();
ret.output = output;
ret.op = new Operation("multTransA-mm") {
@Override
public void process() {
DenseMatrix64F mA = ((VariableMatrix)varA).matrix;
DenseMatrix64F mB = ((VariableMatrix)varB).matrix;
output.matrix.reshape(mA.numCols,mB.numCols);
CommonOps.multTransA(mA,mB,output.matrix);
}
};
} else {
throw new IllegalArgumentException("Expected both inputs to be a matrix");
}
return ret;
}
};
}
}
</syntaxhighlight>
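The operation the custom function wires up computes A = B^T * C without explicitly forming the transpose. Element-wise that is out[i][j] = sum_k b[k][i] * c[k][j], which a plain-array sketch (hypothetical names, not EJML's CommonOps) makes concrete:

```java
public class MultTransASketch {
    // out = a^T * b computed directly: out[i][j] = sum_k a[k][i] * b[k][j].
    // Note the output is (cols of a) x (cols of b), which is why the custom
    // function reshapes its output to mA.numCols x mB.numCols.
    static double[][] multTransA(double[][] a, double[][] b) {
        int rows = a[0].length, cols = b[0].length, inner = a.length;
        double[][] out = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                for (int k = 0; k < inner; k++)
                    out[i][j] += a[k][i] * b[k][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] b = {{1, 2}, {3, 4}};
        double[][] c = {{5, 6}, {7, 8}};
        double[][] r = multTransA(b, c); // b^T * c
        System.out.println(r[0][0] + " " + r[0][1]);
        System.out.println(r[1][0] + " " + r[1][1]);
    }
}
```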
88a941ce1204876d8b26c3fc5d014ed6918fa7dd
129
102
2015-08-10T01:00:59Z
Peter
1
wikitext
text/x-wiki
While Equations provides many of the most common functions used in linear algebra, there are many it does not. The following example demonstrates how to add your own functions to Equations, allowing you to extend its capabilities.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/EquationCustomFunction.java EquationCustomFunction.java source code]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration on how to create and use a custom function in Equation. A custom function must implement
* ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes.
*
* @author Peter Abeles
*/
public class EquationCustomFunction {
public static void main(String[] args) {
Random rand = new Random(234);
Equation eq = new Equation();
eq.getFunctions().add("multTransA",createMultTransA());
SimpleMatrix A = new SimpleMatrix(1,1); // will be resized
SimpleMatrix B = SimpleMatrix.random(3,4,-1,1,rand);
SimpleMatrix C = SimpleMatrix.random(3,4,-1,1,rand);
eq.alias(A,"A",B,"B",C,"C");
eq.process("A=multTransA(B,C)");
System.out.println("Found");
System.out.println(A);
System.out.println("Expected");
B.transpose().mult(C).print();
}
/**
* Create the function. Be sure to handle all possible input types and combinations correctly and provide
* meaningful error messages. The output matrix should be resized to fit the inputs.
*/
public static ManagerFunctions.InputN createMultTransA() {
return new ManagerFunctions.InputN() {
@Override
public Operation.Info create(List<Variable> inputs, ManagerTempVariables manager ) {
if( inputs.size() != 2 )
throw new RuntimeException("Two inputs required");
final Variable varA = inputs.get(0);
final Variable varB = inputs.get(1);
Operation.Info ret = new Operation.Info();
if( varA instanceof VariableMatrix && varB instanceof VariableMatrix ) {
// The output matrix or scalar variable must be created with the provided manager
final VariableMatrix output = manager.createMatrix();
ret.output = output;
ret.op = new Operation("multTransA-mm") {
@Override
public void process() {
DenseMatrix64F mA = ((VariableMatrix)varA).matrix;
DenseMatrix64F mB = ((VariableMatrix)varB).matrix;
output.matrix.reshape(mA.numCols,mB.numCols);
CommonOps.multTransA(mA,mB,output.matrix);
}
};
} else {
throw new IllegalArgumentException("Expected both inputs to be a matrix");
}
return ret;
}
};
}
}
</syntaxhighlight>
d118f176b154b6fa0e92f79aba974b4bcd1e2632
Example Customizing SimpleMatrix
0
16
103
48
2015-04-01T02:51:00Z
Peter
1
wikitext
text/x-wiki
[[SimpleMatrix]] provides an easy-to-use, object-oriented way of doing linear algebra. Many other problems use matrices and could benefit from SimpleMatrix's functionality. In those situations it is desirable to simply extend SimpleMatrix and add additional functions as needed.
Naively extending SimpleMatrix is problematic because internally SimpleMatrix creates new matrices, and its functions would return objects of the wrong type. To get around these problems, SimpleBase is extended instead and its abstract functions are implemented. SimpleBase provides all the core functionality of SimpleMatrix, with the exception of its static functions.
An example is provided below where a new class called StatisticsMatrix is created that adds statistical functions to SimpleMatrix. Usage examples are provided in its main() function.
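The strong-typing trick SimpleBase relies on, where the subclass passes itself as the generic parameter of its own base class, can be sketched independently of EJML (all names below are hypothetical):

```java
// Minimal sketch of the self-referential generics pattern: because the
// subclass is the type parameter, inherited operations return the subclass.
abstract class BaseSketch<T extends BaseSketch<T>> {
    double value;

    // analogous to SimpleBase.createMatrix(): the subclass decides the type
    protected abstract T create();

    public T plus(T other) {
        T r = create();
        r.value = this.value + other.value;
        return r; // typed as the subclass, not the base
    }
}

public class StatsSketch extends BaseSketch<StatsSketch> {
    @Override protected StatsSketch create() { return new StatsSketch(); }

    // a subclass-only method, like mean() or stdev() in StatisticsMatrix
    public double doubled() { return value * 2; }

    public static void main(String[] args) {
        StatsSketch a = new StatsSketch(); a.value = 1.5;
        StatsSketch b = new StatsSketch(); b.value = 2.5;
        // plus() returns StatsSketch, so subclass methods chain directly
        System.out.println(a.plus(b).doubled());
    }
}
```

This is why, in the example below, calls such as A.plus(B) yield a StatisticsMatrix rather than a plain SimpleMatrix.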
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/StatisticsMatrix.java StatisticsMatrix]
= Example =
<syntaxhighlight lang="java">
/**
* Example of how to extend "SimpleMatrix" and add your own functionality. In this case
* two basic statistic operations are added. Since SimpleBase is extended and StatisticsMatrix
* is specified as the generics type, all "SimpleMatrix" operations return a matrix of
* type StatisticsMatrix, ensuring strong typing.
*
* @author Peter Abeles
*/
public class StatisticsMatrix extends SimpleBase<StatisticsMatrix> {
public StatisticsMatrix( int numRows , int numCols ) {
super(numRows,numCols);
}
protected StatisticsMatrix(){}
/**
* Wraps a StatisticsMatrix around 'm'. Does NOT create a copy of 'm' but saves a reference
* to it.
*/
public static StatisticsMatrix wrap( DenseMatrix64F m ) {
StatisticsMatrix ret = new StatisticsMatrix();
ret.mat = m;
return ret;
}
/**
* Computes the mean or average of all the elements.
*
* @return mean
*/
public double mean() {
double total = 0;
final int N = getNumElements();
for( int i = 0; i < N; i++ ) {
total += get(i);
}
return total/N;
}
/**
* Computes the unbiased standard deviation of all the elements.
*
* @return standard deviation
*/
public double stdev() {
double m = mean();
double total = 0;
final int N = getNumElements();
if( N <= 1 )
throw new IllegalArgumentException("There must be more than one element to compute stdev");
for( int i = 0; i < N; i++ ) {
double x = get(i);
total += (x - m)*(x - m);
}
total /= (N-1);
return Math.sqrt(total);
}
/**
* Returns a matrix of StatisticsMatrix type so that SimpleMatrix functions create matrices
* of the correct type.
*/
@Override
protected StatisticsMatrix createMatrix(int numRows, int numCols) {
return new StatisticsMatrix(numRows,numCols);
}
public static void main( String args[] ) {
Random rand = new Random(24234);
int N = 500;
// create two vectors whose elements are drawn from uniform distributions
StatisticsMatrix A = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,0,1,rand));
StatisticsMatrix B = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,1,2,rand));
// the mean should be about 0.5
System.out.println("Mean of A is "+A.mean());
// the mean should be about 1.5
System.out.println("Mean of B is "+B.mean());
StatisticsMatrix C = A.plus(B);
// the mean should be about 2.0
System.out.println("Mean of C = A + B is "+C.mean());
System.out.println("Standard deviation of A is "+A.stdev());
System.out.println("Standard deviation of B is "+B.stdev());
System.out.println("Standard deviation of C is "+C.stdev());
}
}
</syntaxhighlight>
5c7d96aaf734cad0da556b1677a7b6b6f24abe90
130
103
2015-08-10T01:01:29Z
Peter
1
wikitext
text/x-wiki
[[SimpleMatrix]] provides an easy-to-use, object-oriented way of doing linear algebra. Many other problems use matrices and could benefit from SimpleMatrix's functionality. In those situations it is desirable to simply extend SimpleMatrix and add additional functions as needed.
Naively extending SimpleMatrix is problematic because internally SimpleMatrix creates new matrices, and its functions would return objects of the wrong type. To get around these problems, SimpleBase is extended instead and its abstract functions are implemented. SimpleBase provides all the core functionality of SimpleMatrix, with the exception of its static functions.
An example is provided below where a new class called StatisticsMatrix is created that adds statistical functions to SimpleMatrix. Usage examples are provided in its main() function.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/StatisticsMatrix.java StatisticsMatrix.java source code]
* <disqus>Discuss this example</disqus>
= Example =
<syntaxhighlight lang="java">
/**
* Example of how to extend "SimpleMatrix" and add your own functionality. In this case
* two basic statistic operations are added. Since SimpleBase is extended and StatisticsMatrix
* is specified as the generics type, all "SimpleMatrix" operations return a matrix of
* type StatisticsMatrix, ensuring strong typing.
*
* @author Peter Abeles
*/
public class StatisticsMatrix extends SimpleBase<StatisticsMatrix> {
public StatisticsMatrix( int numRows , int numCols ) {
super(numRows,numCols);
}
protected StatisticsMatrix(){}
/**
* Wraps a StatisticsMatrix around 'm'. Does NOT create a copy of 'm' but saves a reference
* to it.
*/
public static StatisticsMatrix wrap( DenseMatrix64F m ) {
StatisticsMatrix ret = new StatisticsMatrix();
ret.mat = m;
return ret;
}
/**
* Computes the mean or average of all the elements.
*
* @return mean
*/
public double mean() {
double total = 0;
final int N = getNumElements();
for( int i = 0; i < N; i++ ) {
total += get(i);
}
return total/N;
}
/**
* Computes the unbiased standard deviation of all the elements.
*
* @return standard deviation
*/
public double stdev() {
double m = mean();
double total = 0;
final int N = getNumElements();
if( N <= 1 )
throw new IllegalArgumentException("There must be more than one element to compute stdev");
for( int i = 0; i < N; i++ ) {
double x = get(i);
total += (x - m)*(x - m);
}
total /= (N-1);
return Math.sqrt(total);
}
/**
* Returns a matrix of StatisticsMatrix type so that SimpleMatrix functions create matrices
* of the correct type.
*/
@Override
protected StatisticsMatrix createMatrix(int numRows, int numCols) {
return new StatisticsMatrix(numRows,numCols);
}
public static void main( String args[] ) {
Random rand = new Random(24234);
int N = 500;
// create two vectors whose elements are drawn from uniform distributions
StatisticsMatrix A = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,0,1,rand));
StatisticsMatrix B = StatisticsMatrix.wrap(RandomMatrices.createRandom(N,1,1,2,rand));
// the mean should be about 0.5
System.out.println("Mean of A is "+A.mean());
// the mean should be about 1.5
System.out.println("Mean of B is "+B.mean());
StatisticsMatrix C = A.plus(B);
// the mean should be about 2.0
System.out.println("Mean of C = A + B is "+C.mean());
System.out.println("Standard deviation of A is "+A.stdev());
System.out.println("Standard deviation of B is "+B.stdev());
System.out.println("Standard deviation of C is "+C.stdev());
}
}
</syntaxhighlight>
13ae4199ecaba5d3b459f5e1559886d500ab111a
Example Fixed Sized Matrices
0
17
104
49
2015-04-01T02:52:03Z
Peter
1
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by having each element in the matrix be a variable in the class. EJML provides support for fixed sized matrices and vectors up to 6x6, at which point it loses its advantage. The example below demonstrates how to use a fixed sized matrix and convert to other matrix types in EJML.
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
== Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
 * example shows a few of the ways a fixed sized matrix can be used.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String args[] ) {
// declare the matrix
FixedMatrix3x3_64F a = new FixedMatrix3x3_64F();
FixedMatrix3x3_64F b = new FixedMatrix3x3_64F();
// Can assign values the usual way
for( int i = 0; i < 3; i++ ) {
for( int j = 0; j < 3; j++ ) {
a.set(i,j,i+j+1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
FixedOps3.transpose(a,b);
b.print();
System.out.println("Determinant = "+FixedOps3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize their values
FixedMatrix3_64F v = new FixedMatrix3_64F(1,2,3);
FixedMatrix3_64F result = new FixedMatrix3_64F();
FixedOps3.mult(a,v,result);
// Conversion into DenseMatrix64F can also be done
DenseMatrix64F dm = ConvertMatrixType.convert(a,null);
dm.print();
// This can be useful if you need to do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
FixedMatrix3x3_64F fv = ConvertMatrixType.convert(sv.getMatrix(),(FixedMatrix3x3_64F)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
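The speed advantage comes from each element being a plain field, so operations compile down to field reads with no array index arithmetic or bounds checks. A minimal hand-rolled sketch of the idea (hypothetical class, not EJML's FixedMatrix types):

```java
// Sketch of a fixed-size matrix type: every element is a named field,
// and operations are written directly against those fields.
public class Fixed2x2Sketch {
    double a11, a12, a21, a22;

    Fixed2x2Sketch(double a11, double a12, double a21, double a22) {
        this.a11 = a11; this.a12 = a12;
        this.a21 = a21; this.a22 = a22;
    }

    // transpose: swap the off-diagonal fields
    Fixed2x2Sketch transpose() {
        return new Fixed2x2Sketch(a11, a21, a12, a22);
    }

    // determinant of a 2x2, fully unrolled
    double det() {
        return a11 * a22 - a12 * a21;
    }

    public static void main(String[] args) {
        Fixed2x2Sketch m = new Fixed2x2Sketch(1, 2, 3, 4);
        System.out.println("det = " + m.det());
        System.out.println("t.a12 = " + m.transpose().a12);
    }
}
```

The trade-off is that every operation must be written (or generated) per size, which is why EJML only provides fixed types up to 6x6.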
f2ce2d2c16908d1454f3fe5c117045d3f1944988
131
104
2015-08-10T01:01:55Z
Peter
1
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by having each element in the matrix be a variable in the class. EJML provides support for fixed sized matrices and vectors up to 6x6, at which point it loses its advantage. The example below demonstrates how to use a fixed sized matrix and convert to other matrix types in EJML.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
 * example shows a few of the ways a fixed sized matrix can be used.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String args[] ) {
// declare the matrix
FixedMatrix3x3_64F a = new FixedMatrix3x3_64F();
FixedMatrix3x3_64F b = new FixedMatrix3x3_64F();
// Can assign values the usual way
for( int i = 0; i < 3; i++ ) {
for( int j = 0; j < 3; j++ ) {
a.set(i,j,i+j+1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
FixedOps3.transpose(a,b);
b.print();
System.out.println("Determinant = "+FixedOps3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize their values
FixedMatrix3_64F v = new FixedMatrix3_64F(1,2,3);
FixedMatrix3_64F result = new FixedMatrix3_64F();
FixedOps3.mult(a,v,result);
// Conversion into DenseMatrix64F can also be done
DenseMatrix64F dm = ConvertMatrixType.convert(a,null);
dm.print();
// This can be useful if you need to do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
FixedMatrix3x3_64F fv = ConvertMatrixType.convert(sv.getMatrix(),(FixedMatrix3x3_64F)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
9883dde3bf452e9af77f6530453af75bff3c4849
Example Complex Math
0
27
105
76
2015-04-01T02:53:02Z
Peter
1
wikitext
text/x-wiki
The Complex64F data type stores a single complex number. Inside the ComplexMath64F class are functions for performing standard math operations on Complex64F, such as addition and division. The example below demonstrates how to perform these operations.
Code on GitHub:
[https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/ExampleComplexMath.java ExampleComplexMath]
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration of different operations that can be performed on complex numbers.
*
* @author Peter Abeles
*/
public class ExampleComplexMath {
public static void main( String []args ) {
Complex64F a = new Complex64F(1,2);
Complex64F b = new Complex64F(-1,-0.6);
Complex64F c = new Complex64F();
ComplexPolar64F polarC = new ComplexPolar64F();
System.out.println("a = "+a);
System.out.println("b = "+b);
System.out.println("------------------");
ComplexMath64F.plus(a, b, c);
System.out.println("a + b = "+c);
ComplexMath64F.minus(a, b, c);
System.out.println("a - b = "+c);
ComplexMath64F.multiply(a, b, c);
System.out.println("a * b = "+c);
ComplexMath64F.divide(a, b, c);
System.out.println("a / b = "+c);
System.out.println("------------------");
ComplexPolar64F polarA = new ComplexPolar64F();
ComplexMath64F.convert(a, polarA);
System.out.println("polar notation of a = "+polarA);
ComplexMath64F.pow(polarA, 3, polarC);
System.out.println("a ** 3 = "+polarC);
ComplexMath64F.convert(polarC, c);
System.out.println("a ** 3 = "+c);
}
}
</syntaxhighlight>
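The rectangular operations above follow the usual identities: (a+bi)(c+di) = (ac-bd) + (ad+bc)i, and division multiplies through by the conjugate of the denominator. A plain-Java sketch of that arithmetic (hypothetical class, not ComplexMath64F itself):

```java
// Immutable complex number sketch showing the arithmetic ComplexMath64F
// performs in rectangular form.
public class ComplexSketch {
    final double re, im;

    ComplexSketch(double re, double im) { this.re = re; this.im = im; }

    // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    ComplexSketch times(ComplexSketch o) {
        return new ComplexSketch(re * o.re - im * o.im,
                                 re * o.im + im * o.re);
    }

    // division via the conjugate: z/w = z * conj(w) / |w|^2
    ComplexSketch dividedBy(ComplexSketch o) {
        double norm2 = o.re * o.re + o.im * o.im;
        return new ComplexSketch((re * o.re + im * o.im) / norm2,
                                 (im * o.re - re * o.im) / norm2);
    }

    public static void main(String[] args) {
        ComplexSketch a = new ComplexSketch(1, 2);
        ComplexSketch b = new ComplexSketch(3, -1);
        ComplexSketch p = a.times(b);
        // (1+2i)(3-i) = 3 - i + 6i + 2 = 5 + 5i
        System.out.println(p.re + " + " + p.im + "i");
        ComplexSketch q = p.dividedBy(b); // dividing back should recover a
        System.out.println(q.re + " + " + q.im + "i");
    }
}
```

Powers, as in the pow() call above, are easiest in polar form, which is why the example converts to ComplexPolar64F first.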
1502699d9493ac35827df313adf73cf213bdad26
132
105
2015-08-10T01:02:41Z
Peter
1
wikitext
text/x-wiki
The Complex64F data type stores a single complex number. Inside the ComplexMath64F class are functions for performing standard math operations on Complex64F, such as addition and division. The example below demonstrates how to perform these operations.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/ExampleComplexMath.java ExampleComplexMath.java source code]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration of different operations that can be performed on complex numbers.
*
* @author Peter Abeles
*/
public class ExampleComplexMath {
public static void main( String []args ) {
Complex64F a = new Complex64F(1,2);
Complex64F b = new Complex64F(-1,-0.6);
Complex64F c = new Complex64F();
ComplexPolar64F polarC = new ComplexPolar64F();
System.out.println("a = "+a);
System.out.println("b = "+b);
System.out.println("------------------");
ComplexMath64F.plus(a, b, c);
System.out.println("a + b = "+c);
ComplexMath64F.minus(a, b, c);
System.out.println("a - b = "+c);
ComplexMath64F.multiply(a, b, c);
System.out.println("a * b = "+c);
ComplexMath64F.divide(a, b, c);
System.out.println("a / b = "+c);
System.out.println("------------------");
ComplexPolar64F polarA = new ComplexPolar64F();
ComplexMath64F.convert(a, polarA);
System.out.println("polar notation of a = "+polarA);
ComplexMath64F.pow(polarA, 3, polarC);
System.out.println("a ** 3 = "+polarC);
ComplexMath64F.convert(polarC, c);
System.out.println("a ** 3 = "+c);
}
}
</syntaxhighlight>
860d2ba1386820590673f5a27f51afec87350cbd
Download
0
6
106
95
2015-04-01T02:58:53Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.26/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "main:all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'main:all', version: '0.27'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>main:all</artifactId>
<version>0.27</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| main:core || Contains core data structures
|-
| main:dense64 || Algorithms for dense real 64-bit floats
|-
| main:denseC64 || Algorithms for dense complex 64-bit floats
|-
| main:equation || Equations interface
|-
| main:simple || Object oriented SimpleMatrix interface
|}
567fb8371d6d4fd352d59936828fb16f49252aab
112
106
2015-04-01T05:56:12Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.26/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages, simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.27'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.27</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
720a70f4397764df2ee8da7b819c20b007ab3142
113
112
2015-04-01T14:56:35Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.27/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages, simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.27'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.27</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
8c1ab6229aacb98fe0b1fc07eef3c1d2f2159123
120
113
2015-08-09T18:53:07Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.28/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages, simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.28'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.28</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
46ae8faada1612f58c29e2402af585a071baaa2b
151
120
2016-01-23T21:53:50Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.29/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages, simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.29'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.29</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
eedeede378cb6d07f81f0132906dae3bfc1d88a1
160
151
2016-11-09T19:20:17Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.30/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "main:all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.30'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.30</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
31c5567223a12137d6c7015925e9ccee3085d1a6
161
160
2016-11-09T19:20:41Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in an usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.30/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'all', version: '0.30'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>all</artifactId>
<version>0.30</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| core || Contains core data structures
|-
| dense64 || Algorithms for dense real 64-bit floats
|-
| denseC64 || Algorithms for dense complex 64-bit floats
|-
| equation || Equations interface
|-
| simple || Object oriented SimpleMatrix interface
|}
deb824ef35265cdbf2e4ad8530621030fb782905
205
161
2017-05-18T04:25:19Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML offers three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" width="500pt" align="center" |
{|width="280pt" style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.27''
|-
| '''Date:''' ''April 1, 2015''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain ''K'' using each of EJML's three interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);           // c = H*P
multTransB(c,H,S);     // S = c*H' = H*P*H'
addEquals(S,R);        // S = H*P*H' + R
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d); // d = H'*inv(S)
mult(P,d,K);           // K = P*d = P*H'*inv(H*P*H' + R)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
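To make the three snippets above concrete, here is a self-contained plain-Java sketch of the same computation, K = P*H'*inv( H*P*H' + R ), using naive array helpers instead of EJML itself. The class and method names here are purely illustrative, and for simplicity the inverse is restricted to the case where the innovation covariance S is 1&times;1 (a single measurement), so inversion reduces to a scalar reciprocal.

<syntaxhighlight lang="java">
// Illustrative, dependency-free sketch of the Kalman gain formula.
public class KalmanGainSketch {

    // c[i][j] = sum_k a[i][k] * b[k][j]
    static double[][] mult(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, p = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int k = 0; k < p; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    static double[][] add(double[][] a, double[][] b) {
        double[][] c = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                c[i][j] = a[i][j] + b[i][j];
        return c;
    }

    // K = P*H'*inv(S), where S = H*P*H' + R is assumed 1x1 here
    static double[][] kalmanGain(double[][] H, double[][] P, double[][] R) {
        double[][] Ht = transpose(H);
        double[][] S = add(mult(mult(H, P), Ht), R);
        if (S.length != 1)
            throw new IllegalArgumentException("sketch handles 1x1 S only");
        double[][] Sinv = {{1.0 / S[0][0]}};
        return mult(P, mult(Ht, Sinv));
    }

    public static void main(String[] args) {
        // Scalar example: H = 1, P = 2, R = 1  =>  S = 3,  K = 2/3
        double[][] K = kalmanGain(new double[][]{{1}},
                                  new double[][]{{2}},
                                  new double[][]{{1}});
        System.out.println(K[0][0]);
    }
}
</syntaxhighlight>

In EJML itself, of course, the helpers above are replaced by tuned library routines, and a real filter would use a general matrix inverse (or better, a linear solver) rather than the 1&times;1 shortcut.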
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
849771e9a817dba5922e0604d6586d54b94bfe2a
158
156
2016-11-09T19:08:09Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.30''
|-
| '''Date:''' ''November 9, 2016''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 60%;" | Decomposition || style="width: 20%;" |Dense Real || style="width: 20%;" |Dense Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" |
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" |
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" |
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" |
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" |
|}
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
fac6ef30712f788bc64470a95a7bc4e8ea16ba91
172
158
2017-01-15T15:02:33Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.30''
|-
| '''Date:''' ''November 9, 2016''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
<span style="font-size:200%">[[The_Great_Refactoring| Preview of Major Upcoming Changes in 0.31]]</span>
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 60%;" | Decomposition || style="width: 20%;" |Dense Real || style="width: 20%;" |Dense Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" |
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" |
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" |
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" |
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" |
|}
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
07bbfbc50a92bd7c337d0f90e670b86ef242c89a
201
172
2017-01-25T01:48:35Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.30''
|-
| '''Date:''' ''November 9, 2016''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
<span style="font-size:200%">[[The_Great_Refactoring| Preview of Major Upcoming Changes in 0.31]]</span>
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="850pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support is currently provided for doubles only; the next release (v0.31) will add support for floats.
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
28818cc93d780471477ee6bf4c4d5d3b61f9d501
202
201
2017-01-25T01:52:07Z
Peter
1
/* Functionality */
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.30''
|-
| '''Date:''' ''November 9, 2016''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
<span style="font-size:200%">[[The_Great_Refactoring| Preview of Major Upcoming Changes in 0.31]]</span>
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support is currently provided for doubles only; the next release (v0.31) will add support for floats.
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
6eb99f243cdde5a48d71e2c329af47885c714e74
203
202
2017-05-18T04:23:04Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.31''
|-
| '''Date:''' ''May 17, 2017''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
<span style="font-size:200%">[[The_Great_Refactoring| Preview of Major Upcoming Changes in 0.31]]</span>
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for both floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
5615c2078b72da9e5c668f5c52d75ecaa8513b60
204
203
2017-05-18T04:23:35Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
There are three distinct ways to interact with EJML: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and nearly complete control over memory creation, speed, and the specific algorithms used. ''SimpleMatrix'' wraps a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.31''
|-
| '''Date:''' ''May 17, 2017''
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ... )
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for both floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multi-threaded work will begin once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
db77abbd44841f1f802a87236f6f96502c0d9d04
Change Log
0
35
108
2015-04-01T03:11:58Z
Peter
1
Created page with "== Version 0.27 == Data: 2015/04/01 * Added SimpleMatrix.randomNormal() for drawing numbers from a normal distribution with zero mean * Added EjmlUnitTests.assertEquals() an..."
wikitext
text/x-wiki
== Version 0.27 ==
Date: 2015/04/01
* Added SimpleMatrix.randomNormal() for drawing numbers from a normal distribution with zero mean
* Added EjmlUnitTests.assertEquals() and similar for SimpleMatrix
* Removed DenseMatrix64F.setReshape()
** Matrix.set(matrix) will now reshape the matrix that's being assigned
* Triangle quality now just uses diagonal elements to scale results
* Support for complex matrices
** Thanks IHMC (http://ihmc.us) for funding parts of this addition
** Basic operations (e.g. multiplication, addition, ... etc)
** LU Decomposition + Linear Solver
** QR Decomposition + Linear Solver
** Cholesky Decomposition + Linear Solver
** Square Matrices: inverse, solve, determinant
** Overdetermined: solve
* ComplexMath64F
** Added sqrt(Complex64F)
* Tweaked matrix inheritance to better support the addition of complex matrices
* Added RandomMatrices setGaussian() and createGaussian()
* Changed how SimpleMatrix computes its threshold for singular values
** Farley Lai noticed this issue
* Added SingularOps.singularThreshold()
* Added no-argument rank and nullity for SVD using the default threshold
* SimpleMatrix.loadCSV() now supports derived types
* Added primitive 32-bit data structures to make adding 32-bit support smoother in the future
* Equation
** 1x1 matrix can be assigned to a double scalar
** When referencing a single element in a matrix it will be extracted as a scalar and not a 1x1 matrix.
** Added sqrt() to parser
** lookupDouble() will now work on 1x1 matrices
* CommonOps
** Added dot(a,b) for dot product between two vectors
** Added extractRow and extractColumn
* FixedOps
** Added extractRow and extractColumn. Thanks nknize for inspiring this modification with a pull request
** Added subtract and subtractEquals. Thanks nknize for the pull request
* Added determinant to Cholesky decomposition interface
* Added getDecomposition() to LinearSolver to provide access to internal classes, which can be useful in some specialized cases. Alternatives were very ugly.
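The `dot(a,b)` operator added to CommonOps above computes the dot product of two vectors. As a dependency-free sketch of the semantics (plain arrays standing in for EJML vectors; the class name and values are hypothetical):

```java
public class DotSketch {
    // Plain-array stand-in for a vector dot product: sum over a[i]*b[i].
    public static double dot(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++)
            sum += a[i] * b[i];
        return sum;
    }

    public static void main(String[] args) {
        // 1*4 + 2*5 + 3*6 = 32
        System.out.println(dot(new double[]{1, 2, 3}, new double[]{4, 5, 6}));
    }
}
```

In EJML the equivalent call operates on its matrix/vector types rather than raw arrays; the sketch only illustrates the computation.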
a5b4e64c2a09ce1562950bd460cd17e91ccf539f
119
108
2015-08-09T18:52:22Z
Peter
1
/* Version 0.27 */
wikitext
text/x-wiki
== Version 0.28 ==
Date: 2015/07/09
* Equations
** Fixed bug where the bounds for a submatrix-scalar assignment were being checked using col,row instead of row,col
** Thanks lenhhoxung for reporting this bug
* FixedOps
** Added vector equivalents for all element-wise matrix operations
** Added multAdd operators
== Version 0.27 ==
Date: 2015/04/01
* Added SimpleMatrix.randomNormal() for drawing numbers from a normal distribution with zero mean
* Added EjmlUnitTests.assertEquals() and similar for SimpleMatrix
* Removed DenseMatrix64F.setReshape()
** Matrix.set(matrix) will now reshape the matrix that's being assigned
* Triangle quality now just uses diagonal elements to scale results
* Support for complex matrices
** Thanks IHMC (http://ihmc.us) for funding parts of this addition
** Basic operations (e.g. multiplication, addition, ... etc)
** LU Decomposition + Linear Solver
** QR Decomposition + Linear Solver
** Cholesky Decomposition + Linear Solver
** Square Matrices: inverse, solve, determinant
** Overdetermined: solve
* ComplexMath64F
** Added sqrt(Complex64F)
* Tweaked matrix inheritance to better support the addition of complex matrices
* Added RandomMatrices setGaussian() and createGaussian()
* Changed how SimpleMatrix computes its threshold for singular values
** Farley Lai noticed this issue
* Added SingularOps.singularThreshold()
* Added no-argument rank and nullity for SVD using the default threshold.
* SimpleMatrix.loadCSV() now supports derived types
* Added primitive 32-bit data structures to make adding 32-bit support in the future smoother
* Equation
** 1x1 matrix can be assigned to a double scalar
** When referencing a single element in a matrix it will be extracted as a scalar and not a 1x1 matrix.
** Added sqrt() to parser
** lookupDouble() will now work on 1x1 matrices
* CommonOps
** Added dot(a,b) for dot product between two vectors
** Added extractRow and extractColumn
* FixedOps
** Added extractRow and extractColumn. Thanks nknize for inspiring this modification with a pull request
** Added subtract and subtractEquals. Thanks nknize for the pull request
* Added determinant to Cholesky decomposition interface
* Added getDecomposition() to LinearSolver to provide access to internal classes, which can be useful in some specialized cases. Alternatives were very ugly.
== Version 0.30 ==
Date: 2016/11/09
* Thanks Peter Fodar for fixing misleading javadoc
* Fixed bug in the computation of eigenvectors where, if the first eigenvalue was complex, the result would be corrupted
** Thanks user343 for reporting the bug!
* Complex matrix multiplication
** added multTransA variants
** added multTransB variants
** added multTransAB variants
* Added the following complex decompositions
** Hessenberg Similar Decomposition
** Tridiagonal Similar Decomposition
* Added MatrixFeatures.isLowerTriangle()
* Added createLike() to all matrices
** Creates a new matrix of the same size and shape, initially filled with zeros
* Fixed CRandomMatrices.createHermitian()
* Fixed CMatrixFeatures.isHermitian()
= Frequently Asked Questions =
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check. Ask yourself: how much memory does that matrix use, and can my computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (columns * rows * 8) / (1024 * 1024 * 1024)
Now take that number and multiply it by 3 or 4 to take into account overhead/working memory, and that's about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can have at most 2^31 - 1 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using other libraries; see below.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer the time to compute the solution could well exceed the lifetime of a typical human.
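The arithmetic above can be sketched in plain Java (the 50000 x 50000 example size is hypothetical, chosen only to illustrate the scale):

```java
public class MatrixMemory {
    // Approximate memory needed to store a dense matrix of doubles
    // (8 bytes per element), in gigabytes.
    public static double gigabytes(long rows, long cols) {
        return (rows * cols * 8.0) / (1024.0 * 1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        // A 50000 x 50000 dense matrix needs roughly 18.6 GB just for
        // storage, before any working memory is accounted for.
        System.out.printf("%.1f GB%n", gigabytes(50_000, 50_000));
    }
}
```

Multiplying the result by 3 or 4, as suggested above, gives a realistic estimate of the total memory your system would need.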
== Will EJML work on Android? ==
Yes, EJML has been used on Android for quite some time. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar on the Maven central repository. See [[Download]] for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML is in the early stages of adding sparse matrix support. Currently only basic operations are supported and no decompositions. In the meantime the following libraries provide some support for sparse matrices. Note: I have not personally used any of these libraries with sparse matrices.
* [https://sites.google.com/site/piotrwendykier/software/csparsej CSparseJ]
* [http://la4j.org/ la4j]
* [https://github.com/fommil/matrix-toolkits-java MTJ]
== How do I do cross product? ==
Cross product and other geometric operations are outside of the scope of EJML. EJML is focused on linear algebra and does not aim to mirror tools such as Matlab.
== What version of Java? ==
EJML can be compiled with Java 1.6 and beyond. With a few minor modifications to the source code you can get it to compile with 1.5.
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post it on the message board! This manual teaches EJML primarily through examples; see below. The examples are selected from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats, using each of the three interfaces provided in EJML, to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.6 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and code that is easy to read and write. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a flow style, which is much easier to read and write. A limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box, and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is a bottleneck. It's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://www.amazon.com/gp/product/0801854148/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0801854148 Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://www.amazon.com/gp/product/0030105676/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0030105676 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
= Matlab to EJML =
To help Matlab users quickly learn how to use EJML, a list of equivalent functions is provided in the sections below. Keep in mind that directly porting Matlab code will often result in inefficient code. In Matlab, for loops are very expensive, so extracting sub-matrices is often the preferred method. Java, like C++, handles for loops much better, and extracting and inserting sub-matrices can be much less efficient than manipulating the matrix directly.
= Equations =
If you're a Matlab user you might seriously want to consider using the [[Equations]] interface in EJML. It is similar to Matlab and can be mixed with the other interfaces.
<syntaxHighlight lang="java">
eq.process("[A(5:10,:) , ones(5,5)] .* normF(B) \ C")
</syntaxHighlight>
That equation would be horrendous to implement using SimpleMatrix or the operations interface. Take a look at the [[Equations|Equations tutorial]] to learn more.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag([1 2 3]) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || C.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A*B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2*A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
= Procedural =
Functions and classes in the procedural interface use DenseMatrix64F as input. Since SimpleMatrix is a wrapper around DenseMatrix64F its internal matrix can be extracted and passed into any of these functions.
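DenseMatrix64F stores its elements in a single row-major double array, which is why the same data can be shared between SimpleMatrix and the procedural functions without copying. A minimal sketch of that layout (an illustrative stand-in class, not EJML's actual implementation):

```java
// Illustrative row-major dense matrix, mirroring the data layout used by
// DenseMatrix64F: element (row,col) lives at index row*numCols + col.
public class RowMajorMatrix {
    public final int numRows;
    public final int numCols;
    public final double[] data;

    public RowMajorMatrix(int numRows, int numCols) {
        this.numRows = numRows;
        this.numCols = numCols;
        this.data = new double[numRows * numCols];
    }

    public double get(int row, int col) {
        return data[row * numCols + col];
    }

    public void set(int row, int col, double value) {
        data[row * numCols + col] = value;
    }
}
```

Because the whole matrix is one flat array, passing it between APIs is just passing a reference; no reshaping or copying is required.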
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps.identity(3)
|-
| C(1,2) = 5 || C.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps.insert(new DenseMatrix64F(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps.extract(A,1,4,2,8)
|-
| diag([1 2 3]) || CommonOps.diag(1,2,3)
|-
| C = A' || CommonOps.transpose(A,C)
|-
| A = A' || CommonOps.transpose(A)
|-
| A = -A || CommonOps.changeSign(A)
|-
| C = A * B || CommonOps.mult(A,B,C)
|-
| C = A .* B || CommonOps.elementMult(A,B,C)
|-
| A = A .* B || CommonOps.elementMult(A,B)
|-
| C = A ./ B || CommonOps.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps.elementDiv(A,B)
|-
| C = A + B || CommonOps.add(A,B,C)
|-
| C = A - B || CommonOps.sub(A,B,C)
|-
| C = 2 * A || CommonOps.scale(2,A,C)
|-
| A = 2 * A || CommonOps.scale(2,A)
|-
| C = A / 2 || CommonOps.divide(2,A,C)
|-
| A = A / 2 || CommonOps.divide(2,A)
|-
| C = inv(A) || CommonOps.invert(A,C)
|-
| A = inv(A) || CommonOps.invert(A)
|-
| C = pinv(A) || CommonOps.pinv(A)
|-
| C = trace(A) || C = CommonOps.trace(A)
|-
| C = det(A) || C = CommonOps.det(A)
|-
| C=kron(A,B) || CommonOps.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps.normf(A)
|-
| norm(A,1) || NormOps.normP1(A)
|-
| norm(A,2) || NormOps.normP2(A)
|-
| norm(A,Inf) || NormOps.normPInf(A)
|-
| max(abs(A(:))) || CommonOps.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps.createMatrixV(eig); D = EigenOps.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory.lu(A.numCols)
|}
= Example Kalman Filter =
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. Runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
Code on GitHub:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter. There are applications, such as target tracking, where matrix inversion of the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F,Q,H;
// system state estimate
private SimpleMatrix x,P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-KH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced from the KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F,Q,H;
// system state estimate
private DenseMatrix64F x,P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x,P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trade-off empirically. Runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Procedural || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
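The timings in the table above were presumably collected with a small driver along the following lines (a hypothetical sketch; the actual benchmark source is not shown on this page). The class and method names are illustrative, and the stand-in workload would be replaced by a loop driving one of the filter implementations through many predict/update cycles:

```java
public class FilterBenchmark {
    // Times a workload and returns the elapsed time in milliseconds.
    static long timeMillis(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in workload; in a real benchmark each Runnable would drive one
        // KalmanFilter implementation through many predict/update cycles.
        long elapsed = timeMillis(() -> {
            double acc = 0;
            for (int i = 0; i < 1_000_000; i++) acc += Math.sqrt(i);
            if (acc < 0) throw new IllegalStateException("unreachable");
        });
        System.out.println("elapsed >= 0: " + (elapsed >= 0));
    }
}
```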
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterProcedural]
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice for every situation. Other variants seek to improve numerical stability and/or avoid the matrix inversion. It's worth pointing out that some argue you should never invert the matrix in a Kalman filter. There are applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
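To make the update equations concrete without pulling in EJML, here is a deliberately tiny one-dimensional Kalman filter written with plain doubles (a sketch for intuition only; the class is hypothetical and not part of EJML). Each scalar line mirrors one of the matrix expressions, x = F x, P = F P F' + Q, and so on, that appear in the implementations below:

```java
// A 1-D Kalman filter using plain doubles; every line mirrors one of the
// matrix equations in the comments of the EJML examples.
public class ScalarKalman {
    double x;  // state estimate
    double p;  // estimate variance

    ScalarKalman(double x0, double p0) { x = x0; p = p0; }

    // x = F x ;  P = F P F' + Q
    void predict(double f, double q) {
        x = f * x;
        p = f * p * f + q;
    }

    // y = z - H x ; S = H P H' + R ; K = P H' / S ; x += K y ; P -= K H P
    void update(double z, double h, double r) {
        double y = z - h * x;
        double s = h * p * h + r;
        double k = p * h / s;
        x = x + k * y;
        p = p - k * h * p;
    }

    public static void main(String[] args) {
        ScalarKalman kf = new ScalarKalman(0.0, 1.0);
        kf.predict(1.0, 0.1);      // p grows to 1.1
        kf.update(1.0, 1.0, 0.1);  // measurement pulls the state toward 1
        System.out.println(kf.x > 0.9 && kf.x < 1.0);  // x = 1.1/1.2, between prior and z
        System.out.println(kf.p < 1.1);                // variance shrinks after update
    }
}
```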
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DenseMatrix64F. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F,Q,H;
// system state estimate
private SimpleMatrix x,P;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DenseMatrix64F _z, DenseMatrix64F _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DenseMatrix64F getState() {
return x.getMatrix();
}
@Override
public DenseMatrix64F getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Procedural Example ==
<syntaxhighlight lang="java">
// Imports assume the EJML v0.27 package layout; the static import supplies
// the mult(), multTransB(), addEquals(), etc. operations used below.
import org.ejml.data.DenseMatrix64F;
import org.ejml.factory.LinearSolverFactory;
import org.ejml.interfaces.linsol.LinearSolver;
import static org.ejml.ops.CommonOps.*;
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
 * memory creation/destruction has been reduced compared to KalmanFilterSimple. A specialized solver is
 * used to invert the symmetric positive definite (SPD) matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DenseMatrix64F F,Q,H;
// system state estimate
private DenseMatrix64F x,P;
// these are predeclared for efficiency reasons
private DenseMatrix64F a,b;
private DenseMatrix64F y,S,S_inv,c,d;
private DenseMatrix64F K;
private LinearSolver<DenseMatrix64F> solver;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DenseMatrix64F(dimenX,1);
b = new DenseMatrix64F(dimenX,dimenX);
y = new DenseMatrix64F(dimenZ,1);
S = new DenseMatrix64F(dimenZ,dimenZ);
S_inv = new DenseMatrix64F(dimenZ,dimenZ);
c = new DenseMatrix64F(dimenZ,dimenX);
d = new DenseMatrix64F(dimenX,dimenZ);
K = new DenseMatrix64F(dimenX,dimenZ);
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory.symmPosDef(dimenX);
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
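The predeclared matrices (a, b, c, d, ...) are the heart of the procedural API's efficiency: results are written into caller-supplied storage instead of freshly allocated matrices. The same pattern can be sketched with plain arrays (illustrative only; EJML's mult(F,x,a) behaves analogously by writing into a):

```java
// Illustrates the preallocation pattern from the procedural example with plain
// arrays: the work buffer is allocated once, so the hot loop allocates nothing.
public class PreallocDemo {
    // out = A * v, writing into a caller-provided buffer instead of allocating.
    static void mult(double[][] a, double[] v, double[] out) {
        for (int i = 0; i < a.length; i++) {
            double sum = 0;
            for (int j = 0; j < v.length; j++) sum += a[i][j] * v[j];
            out[i] = sum;
        }
    }

    public static void main(String[] args) {
        double[][] f = {{1, 1}, {0, 1}};   // constant-velocity transition matrix
        double[] x = {0, 1};               // position 0, velocity 1
        double[] a = new double[2];        // predeclared work buffer, reused each step
        for (int step = 0; step < 3; step++) {  // x = F x, applied three times
            mult(f, x, a);
            System.arraycopy(a, 0, x, 0, 2);
        }
        System.out.println(x[0] + " " + x[1]);  // position advanced by 1 per step
    }
}
```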
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DenseMatrix64F x,P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DenseMatrix64F F, DenseMatrix64F Q, DenseMatrix64F H) {
int dimenX = F.numCols;
x = new DenseMatrix64F(dimenX,1);
P = new DenseMatrix64F(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix placeholder to avoid compiler errors. It will be replaced later on.
eq.alias(new DenseMatrix64F(1,1),"z");
eq.alias(new DenseMatrix64F(1,1),"R");
// Pre-compile the equations so they don't have to be parsed each time they are invoked.
// This is more cumbersome, but for small matrices the parsing overhead is significant.
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DenseMatrix64F x, DenseMatrix64F P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DenseMatrix64F z, DenseMatrix64F R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DenseMatrix64F getState() {
return x;
}
@Override
public DenseMatrix64F getCovariance() {
return P;
}
}
</syntaxhighlight>
= Example Levenberg-Marquardt =
Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
The algorithm is provided a function, a set of inputs, a set of outputs, and an initial estimate of the parameters (an all-zero initial estimate often works). It finds the parameters that minimize the difference between the computed output and the observed output. A numerical Jacobian is used to estimate the function's gradient.
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
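The forward-difference idea behind the numerical Jacobian can be sketched in a few lines of plain Java (illustrative only; the actual example computes a full Jacobian matrix this way, with its DELTA constant playing the role of the perturbation below):

```java
import java.util.function.DoubleUnaryOperator;

// Forward-difference derivative: the same idea LevenbergMarquardt uses for its
// numerical Jacobian, applied here to a single scalar function.
public class ForwardDiff {
    static double derivative(DoubleUnaryOperator f, double x, double delta) {
        return (f.applyAsDouble(x + delta) - f.applyAsDouble(x)) / delta;
    }

    public static void main(String[] args) {
        // d/dx (x^2) at x = 3 is 6; the finite difference should be very close.
        double d = derivative(v -> v * v, 3.0, 1e-8);
        System.out.println(Math.abs(d - 6.0) < 1e-4);
    }
}
```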
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt.java code]
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + &lambda; I)<sup>-1</sup> d <br>
* d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
* H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// how much the numerical jacobian calculation perturbs the parameters by.
// In a better implementation there are smarter ways to choose this delta. See Numerical Recipes.
private final static double DELTA = 1e-8;
private double initialLambda;
// the function that is optimized
private Function func;
// the optimized parameters and associated costs
private DenseMatrix64F param;
private double initialCost;
private double finalCost;
// used by matrix operations
private DenseMatrix64F d;
private DenseMatrix64F H;
private DenseMatrix64F negDelta;
private DenseMatrix64F tempParam;
private DenseMatrix64F A;
// variables used by the numerical jacobian algorithm
private DenseMatrix64F temp0;
private DenseMatrix64F temp1;
// used when computing d and H variables
private DenseMatrix64F tempDH;
// Where the numerical Jacobian is stored.
private DenseMatrix64F jacobian;
/**
* Creates a new instance that uses the provided cost function.
*
* @param funcCost Cost function that is being optimized.
*/
public LevenbergMarquardt( Function funcCost )
{
this.initialLambda = 1;
// declare data to some initial small size. It will grow later on as needed.
int maxElements = 1;
int numParam = 1;
this.temp0 = new DenseMatrix64F(maxElements,1);
this.temp1 = new DenseMatrix64F(maxElements,1);
this.tempDH = new DenseMatrix64F(maxElements,1);
this.jacobian = new DenseMatrix64F(numParam,maxElements);
this.func = funcCost;
this.param = new DenseMatrix64F(numParam,1);
this.d = new DenseMatrix64F(numParam,1);
this.H = new DenseMatrix64F(numParam,numParam);
this.negDelta = new DenseMatrix64F(numParam,1);
this.tempParam = new DenseMatrix64F(numParam,1);
this.A = new DenseMatrix64F(numParam,numParam);
}
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
public DenseMatrix64F getParameters() {
return param;
}
/**
* Finds the best fit parameters.
*
* @param initParam The initial set of parameters for the function.
* @param X The inputs to the function.
* @param Y The "observed" output of the function
* @return true if it succeeded and false if it did not.
*/
public boolean optimize( DenseMatrix64F initParam ,
DenseMatrix64F X ,
DenseMatrix64F Y )
{
configure(initParam,X,Y);
// save the cost of the initial parameters so that it knows if it improves or not
initialCost = cost(param,X,Y);
// iterate until the difference between the costs is insignificant
// or it iterates too many times
if( !adjustParam(X, Y, initialCost) ) {
finalCost = Double.NaN;
return false;
}
return true;
}
/**
* Iterate until the difference between the costs is insignificant
* or it iterates too many times
*/
private boolean adjustParam(DenseMatrix64F X, DenseMatrix64F Y,
double prevCost) {
// lambda adjusts how big of a step it takes
double lambda = initialLambda;
// the difference between the current and previous cost
double difference = 1000;
for( int iter = 0; iter < 20 && difference > 1e-6 ; iter++ ) {
// compute some variables based on the gradient
computeDandH(param,X,Y);
// try various step sizes and see if any of them improve the
// results over what has already been done
boolean foundBetter = false;
for( int i = 0; i < 5; i++ ) {
computeA(A,H,lambda);
if( !solve(A,d,negDelta) ) {
return false;
}
// compute the candidate parameters
subtract(param, negDelta, tempParam);
double cost = cost(tempParam,X,Y);
if( cost < prevCost ) {
// the candidate parameters produced better results so use it
foundBetter = true;
param.set(tempParam);
difference = prevCost - cost;
prevCost = cost;
lambda /= 10.0;
} else {
lambda *= 10.0;
}
}
// it reached a point where it can't improve so exit
if( !foundBetter )
break;
}
finalCost = prevCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure( DenseMatrix64F initParam , DenseMatrix64F X , DenseMatrix64F Y )
{
if( Y.getNumRows() != X.getNumRows() ) {
throw new IllegalArgumentException("Different vector lengths");
} else if( Y.getNumCols() != 1 || X.getNumCols() != 1 ) {
throw new IllegalArgumentException("Inputs must be a column vector");
}
int numParam = initParam.getNumElements();
int numPoints = Y.getNumRows();
if( param.getNumElements() != initParam.getNumElements() ) {
// reshaping a matrix means that new memory is only declared when needed
this.param.reshape(numParam,1, false);
this.d.reshape(numParam,1, false);
this.H.reshape(numParam,numParam, false);
this.negDelta.reshape(numParam,1, false);
this.tempParam.reshape(numParam,1, false);
this.A.reshape(numParam,numParam, false);
}
param.set(initParam);
// reshaping a matrix means that new memory is only declared when needed
temp0.reshape(numPoints,1, false);
temp1.reshape(numPoints,1, false);
tempDH.reshape(numPoints,1, false);
jacobian.reshape(numParam,numPoints, false);
}
/**
* Computes the d and H parameters, where d is the average error gradient and
* H is an approximation of the Hessian.
*/
private void computeDandH( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y )
{
func.compute(param,x, tempDH);
subtractEquals(tempDH, y);
computeNumericalJacobian(param,x,jacobian);
int numParam = param.getNumElements();
int length = x.getNumElements();
// d = average{ (f(x_i;p) - y_i) * jacobian(:,i) }
for( int i = 0; i < numParam; i++ ) {
double total = 0;
for( int j = 0; j < length; j++ ) {
total += tempDH.get(j,0)*jacobian.get(i,j);
}
d.set(i,0,total/length);
}
// compute the approximation of the hessian
multTransB(jacobian,jacobian,H);
scale(1.0/length,H);
}
/**
* A = H + lambda*I <br>
* <br>
* where I is an identity matrix.
*/
private void computeA( DenseMatrix64F A , DenseMatrix64F H , double lambda )
{
final int numParam = param.getNumElements();
A.set(H);
for( int i = 0; i < numParam; i++ ) {
A.set(i,i, A.get(i,i) + lambda);
}
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x;p) - y)^2
*/
private double cost( DenseMatrix64F param , DenseMatrix64F X , DenseMatrix64F Y)
{
func.compute(param,X, temp0);
double error = diffNormF(temp0,Y);
return error*error / (double)X.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param The set of parameters that the Jacobian is to be computed at.
* @param pt The point around which the Jacobian is to be computed.
* @param deriv Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DenseMatrix64F param ,
DenseMatrix64F pt ,
DenseMatrix64F deriv )
{
double invDelta = 1.0/DELTA;
func.compute(param,pt, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it affects the results.
for( int i = 0; i < param.numRows; i++ ) {
param.data[i] += DELTA;
func.compute(param,pt, temp1);
// compute the difference between the two parameters and divide by the delta
add(invDelta,temp1,-invDelta,temp0,temp1);
// copy the results into the jacobian matrix
System.arraycopy(temp1.data,0,deriv.data,i*pt.numRows,pt.numRows);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized.
*/
public interface Function {
/**
* Computes the output for each value in matrix x given the set of parameters.
*
* @param param The parameter for the function.
* @param x the input points.
* @param y the resulting output.
*/
public void compute( DenseMatrix64F param , DenseMatrix64F x , DenseMatrix64F y );
}
}
</syntaxhighlight>
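The forward-difference scheme used by computeNumericalJacobian() above can be illustrated without EJML. The sketch below is a hypothetical plain-Java reduction of the same idea: perturb one parameter by DELTA, re-evaluate the function, and divide the change by DELTA. The model function and parameter values are made up for illustration only.

```java
// Plain-Java sketch of the forward-difference approximation used by
// computeNumericalJacobian(): perturb each parameter by DELTA and divide
// the change in the output by DELTA. (Hypothetical example, no EJML.)
public class JacobianSketch {
    static final double DELTA = 1e-6;

    // f(p, x) = p0*x + p1*x^2, a made-up model function
    static double f(double[] p, double x) {
        return p[0] * x + p[1] * x * x;
    }

    // Partial derivatives df/dp_i at the point x
    static double[] numericalGradient(double[] p, double x) {
        double[] grad = new double[p.length];
        double base = f(p, x);
        for (int i = 0; i < p.length; i++) {
            p[i] += DELTA;                      // perturb parameter i
            grad[i] = (f(p, x) - base) / DELTA; // forward difference
            p[i] -= DELTA;                      // restore it
        }
        return grad;
    }

    public static void main(String[] args) {
        double[] grad = numericalGradient(new double[]{2.0, 0.5}, 3.0);
        // analytically df/dp0 = x = 3 and df/dp1 = x^2 = 9
        System.out.println(Math.abs(grad[0] - 3.0) < 1e-4);
        System.out.println(Math.abs(grad[1] - 9.0) < 1e-4);
    }
}
```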
7b0208b949fac2086683fbcbc1ea7205d225f15c
Example Principal Component Analysis
0
13
126
99
2015-08-10T00:59:18Z
Peter
1
wikitext
text/x-wiki
Principal Component Analysis (PCA) is a popular and simple to implement classification technique, often used in face recognition. The following is an example of how to implement it in EJML using the procedural interface. It is assumed that the reader is already familiar with PCA.
External Resources
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PrincipalComponentAnalysis.java PrincipalComponentAnalysis.java source code]
* [http://en.wikipedia.org/wiki/Principal_component_analysis General PCA information on Wikipedia]
* <disqus>Discuss this example</disqus>
= Sample Code =
<syntaxhighlight lang="java">
/**
* <p>
* The following is a simple example of how to perform basic principal component analysis in EJML.
* </p>
*
* <p>
* Principal Component Analysis (PCA) is typically used to develop a linear model for a set of data
* (e.g. face images) which can then be used to test for membership. PCA works by converting the
* set of data to a new basis that is a subspace of the original set. The subspace is selected
* to maximize information.
* </p>
* <p>
* PCA is typically derived as an eigenvalue problem. However in this implementation {@link org.ejml.interfaces.decomposition.SingularValueDecomposition SVD}
* is used instead because it will produce a more numerically stable solution. Computation using EVD requires explicitly
* computing the variance of each sample set. The variance is computed by squaring the residual, which can
* cause loss of precision.
* </p>
*
* <p>
* Usage:<br>
* 1) call setup()<br>
* 2) For each sample (e.g. an image ) call addSample()<br>
* 3) After all the samples have been added call computeBasis()<br>
* 4) Call sampleToEigenSpace() , eigenToSampleSpace() , errorMembership() , response()
* </p>
*
* @author Peter Abeles
*/
public class PrincipalComponentAnalysis {
// principal component subspace is stored in the rows
private DenseMatrix64F V_t;
// how many principal components are used
private int numComponents;
// where the data is stored
private DenseMatrix64F A = new DenseMatrix64F(1,1);
private int sampleIndex;
// mean values of each element across all the samples
double mean[];
public PrincipalComponentAnalysis() {
}
/**
* Must be called before any other functions. Declares and sets up internal data structures.
*
* @param numSamples Number of samples that will be processed.
* @param sampleSize Number of elements in each sample.
*/
public void setup( int numSamples , int sampleSize ) {
mean = new double[ sampleSize ];
A.reshape(numSamples,sampleSize,false);
sampleIndex = 0;
numComponents = -1;
}
/**
* Adds a new sample of the raw data to internal data structure for later processing. All the samples
* must be added before computeBasis is called.
*
* @param sampleData Sample from original raw data.
*/
public void addSample( double[] sampleData ) {
if( A.getNumCols() != sampleData.length )
throw new IllegalArgumentException("Unexpected sample size");
if( sampleIndex >= A.getNumRows() )
throw new IllegalArgumentException("Too many samples");
for( int i = 0; i < sampleData.length; i++ ) {
A.set(sampleIndex,i,sampleData[i]);
}
sampleIndex++;
}
/**
* Computes a basis (the principal components) from the most dominant eigenvectors.
*
* @param numComponents Number of vectors it will use to describe the data. Typically much
* smaller than the number of elements in the input vector.
*/
public void computeBasis( int numComponents ) {
if( numComponents > A.getNumCols() )
throw new IllegalArgumentException("More components requested than the data's length.");
if( sampleIndex != A.getNumRows() )
throw new IllegalArgumentException("Not all the data has been added");
if( numComponents > sampleIndex )
throw new IllegalArgumentException("More data needed to compute the desired number of components");
this.numComponents = numComponents;
// compute the mean of all the samples
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
mean[j] += A.get(i,j);
}
}
for( int j = 0; j < mean.length; j++ ) {
mean[j] /= A.getNumRows();
}
// subtract the mean from the original data
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
A.set(i,j,A.get(i,j)-mean[j]);
}
}
// Compute SVD and save time by not computing U
SingularValueDecomposition<DenseMatrix64F> svd =
DecompositionFactory.svd(A.numRows, A.numCols, false, true, false);
if( !svd.decompose(A) )
throw new RuntimeException("SVD failed");
V_t = svd.getV(null,true);
DenseMatrix64F W = svd.getW(null);
// Singular values are in an arbitrary order initially
SingularOps.descendingOrder(null,false,W,V_t,true);
// strip off unneeded components and find the basis
V_t.reshape(numComponents,mean.length,true);
}
/**
* Returns a vector from the PCA's basis.
*
* @param which Which component's vector is to be returned.
* @return Vector from the PCA basis.
*/
public double[] getBasisVector( int which ) {
if( which < 0 || which >= numComponents )
throw new IllegalArgumentException("Invalid component");
DenseMatrix64F v = new DenseMatrix64F(1,A.numCols);
CommonOps.extract(V_t,which,which+1,0,A.numCols,v,0,0);
return v.data;
}
/**
* Converts a vector from sample space into eigen space.
*
* @param sampleData Sample space data.
* @return Eigen space projection.
*/
public double[] sampleToEigenSpace( double[] sampleData ) {
if( sampleData.length != A.getNumCols() )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1,true,sampleData);
DenseMatrix64F r = new DenseMatrix64F(numComponents,1);
CommonOps.subtract(s, mean, s);
CommonOps.mult(V_t,s,r);
return r.data;
}
/**
* Converts a vector from eigen space into sample space.
*
* @param eigenData Eigen space data.
* @return Sample space projection.
*/
public double[] eigenToSampleSpace( double[] eigenData ) {
if( eigenData.length != numComponents )
throw new IllegalArgumentException("Unexpected sample length");
DenseMatrix64F s = new DenseMatrix64F(A.getNumCols(),1);
DenseMatrix64F r = DenseMatrix64F.wrap(numComponents,1,eigenData);
CommonOps.multTransA(V_t,r,s);
DenseMatrix64F mean = DenseMatrix64F.wrap(A.getNumCols(),1,this.mean);
CommonOps.add(s,mean,s);
return s.data;
}
/**
* <p>
* The membership error for a sample. If the error is less than a threshold then
* it can be considered a member. The threshold's value depends on the data set.
* </p>
* <p>
* The error is computed by projecting the sample into eigenspace, projecting it
* back into sample space, and then computing the Euclidean distance between the
* original sample and its reconstruction.
* </p>
*
* @param sampleA The sample whose membership status is being considered.
* @return Its membership error.
*/
public double errorMembership( double[] sampleA ) {
double[] eig = sampleToEigenSpace(sampleA);
double[] reproj = eigenToSampleSpace(eig);
double total = 0;
for( int i = 0; i < reproj.length; i++ ) {
double d = sampleA[i] - reproj[i];
total += d*d;
}
return Math.sqrt(total);
}
/**
* Computes the dot product of each basis vector against the sample. Can be used as a measure
* for membership in the training sample set. High values correspond to a better fit.
*
* @param sample Sample of original data.
* @return Higher value indicates it is more likely to be a member of input dataset.
*/
public double response( double[] sample ) {
if( sample.length != A.numCols )
throw new IllegalArgumentException("Expected input vector to be in sample space");
DenseMatrix64F dots = new DenseMatrix64F(numComponents,1);
DenseMatrix64F s = DenseMatrix64F.wrap(A.numCols,1,sample);
CommonOps.mult(V_t,s,dots);
return NormOps.normF(dots);
}
}
</syntaxhighlight>
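As a plain-Java sketch (no EJML), the round trip performed by sampleToEigenSpace() and eigenToSampleSpace() above reduces, for a single principal component, to: subtract the mean, take a dot product with a unit basis vector, then reconstruct. The basis vector and mean below are hard-coded assumptions rather than the output of an SVD.

```java
// Hypothetical one-component PCA round trip: project a mean-subtracted
// sample onto a unit basis vector, then reconstruct it. A sample lying
// exactly on the basis direction reconstructs perfectly, so its
// membership error (cf. errorMembership()) is zero.
public class PcaRoundTrip {
    // assumed unit-length principal axis and mean (not computed here)
    static final double[] BASIS = { 0.6, 0.8 };
    static final double[] MEAN  = { 1.0, 2.0 };

    static double toEigenSpace(double[] sample) {
        double dot = 0;
        for (int i = 0; i < sample.length; i++)
            dot += (sample[i] - MEAN[i]) * BASIS[i]; // project after mean subtraction
        return dot;
    }

    static double[] toSampleSpace(double coord) {
        double[] s = new double[BASIS.length];
        for (int i = 0; i < s.length; i++)
            s[i] = coord * BASIS[i] + MEAN[i];       // reconstruct, add mean back
        return s;
    }

    public static void main(String[] args) {
        double[] sample = { 4.0, 6.0 };              // lies on the basis direction
        double[] rebuilt = toSampleSpace(toEigenSpace(sample));
        double err = Math.hypot(sample[0] - rebuilt[0], sample[1] - rebuilt[1]);
        System.out.println(err < 1e-12);
    }
}
```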
d7229cd81da07a2ed2819e8fc6c2590e3760ef3b
Example Polynomial Fitting
0
14
127
100
2015-08-10T00:59:49Z
Peter
1
wikitext
text/x-wiki
In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are: 1) how to create a linear solver using LinearSolverFactory, 2) how to use an adjustable linear solver, and 3) effective matrix reshaping. This is all done using the procedural interface.
First a best fit polynomial is fit to a set of data, then outliers are removed from the observation set and the coefficients are recomputed. Outliers are removed efficiently using an adjustable solver that does not resolve the whole system again.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PolynomialFit.java PolynomialFit.java source code]
* <disqus>Discuss this example</disqus>
= PolynomialFit Example Code =
<syntaxhighlight lang="java">
/**
* <p>
* This example demonstrates how a polynomial can be fit to a set of data. This is done by
* using a least squares solver that is adjustable. By using an adjustable solver elements
* can be inexpensively removed and the coefficients recomputed. This is much less expensive
* than resolving the whole system from scratch.
* </p>
* <p>
* The following is demonstrated:<br>
* <ol>
* <li>Creating a solver using LinearSolverFactory</li>
* <li>Using an adjustable solver</li>
* <li>reshaping</li>
* </ol>
* @author Peter Abeles
*/
public class PolynomialFit {
// Vandermonde matrix
DenseMatrix64F A;
// matrix containing computed polynomial coefficients
DenseMatrix64F coef;
// observation matrix
DenseMatrix64F y;
// solver used to compute
AdjustableLinearSolver solver;
/**
* Constructor.
*
* @param degree The polynomial's degree which is to be fit to the observations.
*/
public PolynomialFit( int degree ) {
coef = new DenseMatrix64F(degree+1,1);
A = new DenseMatrix64F(1,degree+1);
y = new DenseMatrix64F(1,1);
// create a solver that allows elements to be added or removed efficiently
solver = LinearSolverFactory.adjustable();
}
/**
* Returns the computed coefficients
*
* @return polynomial coefficients that best fit the data.
*/
public double[] getCoef() {
return coef.data;
}
/**
* Computes the best fit set of polynomial coefficients to the provided observations.
*
* @param samplePoints where the observations were sampled.
* @param observations A set of observations.
*/
public void fit( double samplePoints[] , double[] observations ) {
// Create a copy of the observations and put it into a matrix
y.reshape(observations.length,1,false);
System.arraycopy(observations,0, y.data,0,observations.length);
// reshape the matrix to avoid unnecessarily declaring new memory
// save values is set to false since its old values don't matter
A.reshape(y.numRows, coef.numRows,false);
// set up the A matrix
for( int i = 0; i < observations.length; i++ ) {
double obs = 1;
for( int j = 0; j < coef.numRows; j++ ) {
A.set(i,j,obs);
obs *= samplePoints[i];
}
}
// process the A matrix and see if it failed
if( !solver.setA(A) )
throw new RuntimeException("Solver failed");
// solve for the coefficients
solver.solve(y,coef);
}
/**
* Removes the observation that fits the model the worst and recomputes the coefficients.
* This is done efficiently by using an adjustable solver. Often the elements with
* the largest errors are outliers and not part of the system being modeled. By removing them
* a more accurate set of coefficients can be computed.
*/
public void removeWorstFit() {
// find the observation with the most error
int worstIndex=-1;
double worstError = -1;
for( int i = 0; i < y.numRows; i++ ) {
double predictedObs = 0;
for( int j = 0; j < coef.numRows; j++ ) {
predictedObs += A.get(i,j)*coef.get(j,0);
}
double error = Math.abs(predictedObs- y.get(i,0));
if( error > worstError ) {
worstError = error;
worstIndex = i;
}
}
// nothing left to remove, so just return
if( worstIndex == -1 )
return;
// remove that observation
removeObservation(worstIndex);
// update A
solver.removeRowFromA(worstIndex);
// solve for the parameters again
solver.solve(y,coef);
}
/**
* Removes an element from the observation matrix.
*
* @param index which element is to be removed
*/
private void removeObservation( int index ) {
final int N = y.numRows-1;
final double d[] = y.data;
// shift
for( int i = index; i < N; i++ ) {
d[i] = d[i+1];
}
y.numRows--;
}
}
</syntaxhighlight>
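The row construction loop inside fit() can be shown standalone. This plain-Java sketch (hypothetical helper name, no EJML required) builds one Vandermonde row using the same running-product trick, so column j holds the sample point raised to the j-th power:

```java
// Standalone illustration of the Vandermonde row construction in fit():
// column j of a row holds samplePoint^j, accumulated with a running product.
public class VandermondeRow {
    static double[] row(double samplePoint, int degree) {
        double[] r = new double[degree + 1];
        double obs = 1;
        for (int j = 0; j <= degree; j++) {
            r[j] = obs;           // samplePoint^j
            obs *= samplePoint;
        }
        return r;
    }

    public static void main(String[] args) {
        // for samplePoint = 2 and a cubic fit the row is [1, 2, 4, 8]
        System.out.println(java.util.Arrays.toString(row(2.0, 3)));
    }
}
```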
ca32948991a6e1605d5a0554c2918533ea9093ae
Example Polynomial Roots
0
15
128
101
2015-08-10T01:00:25Z
Peter
1
wikitext
text/x-wiki
Eigenvalue decomposition can be used to find the roots in a polynomial by constructing the so called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While faster techniques do exist for root finding, this is one of the most stable and probably the easiest to implement.
Because the companion matrix is not symmetric a generalized eigenvalue [[Matrix Decompositions|decomposition]] is needed. The roots of the polynomial may also be [http://en.wikipedia.org/wiki/Complex_number complex].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.27/examples/src/org/ejml/example/PolynomialRootFinder.java PolynomialRootFinder.java source code]
* <disqus>Discuss this example</disqus>
= Example Code =
<syntaxhighlight lang="java">
public class PolynomialRootFinder {
/**
* <p>
* Given a set of polynomial coefficients, compute the roots of the polynomial. Depending on
the polynomial being considered the roots may contain complex numbers. When complex numbers are
* present they will come in pairs of complex conjugates.
* </p>
*
* <p>
* Coefficients are ordered from least to most significant, e.g: y = c[0] + x*c[1] + x*x*c[2].
* </p>
*
* @param coefficients Coefficients of the polynomial.
* @return The roots of the polynomial
*/
public static Complex64F[] findRoots(double... coefficients) {
int N = coefficients.length-1;
// Construct the companion matrix
DenseMatrix64F c = new DenseMatrix64F(N,N);
double a = coefficients[N];
for( int i = 0; i < N; i++ ) {
c.set(i,N-1,-coefficients[i]/a);
}
for( int i = 1; i < N; i++ ) {
c.set(i,i-1,1);
}
// use generalized eigenvalue decomposition to find the roots
EigenDecomposition<DenseMatrix64F> evd = DecompositionFactory.eig(N,false);
evd.decompose(c);
Complex64F[] roots = new Complex64F[N];
for( int i = 0; i < N; i++ ) {
roots[i] = evd.getEigenvalue(i);
}
return roots;
}
}
</syntaxhighlight>
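To see why the companion matrix trick works, recall that the roots of the polynomial are exactly the eigenvalues of the companion matrix. The plain-Java sketch below (a hypothetical helper, no EJML) builds the 2x2 companion matrix for 6 - 5x + x^2 exactly as findRoots() does and verifies that multiplying it by a suitable vector scales that vector by a root, i.e. the root is an eigenvalue:

```java
// For c0 + c1*x + c2*x^2 the companion matrix built as in findRoots() is
// [[0, -c0/c2], [1, -c1/c2]], and for each root r the vector
// [-c0/(c2*r), 1] is an eigenvector with eigenvalue r.
public class CompanionCheck {
    static boolean isQuadraticEigenvalue(double[] coef, double r) {
        double a = coef[2];
        double[][] c = { { 0, -coef[0] / a },   // last column: -c_i / c2
                         { 1, -coef[1] / a } }; // sub-diagonal of ones
        double[] v = { -coef[0] / (a * r), 1 }; // eigenvector candidate for r
        for (int i = 0; i < 2; i++) {
            double sum = c[i][0] * v[0] + c[i][1] * v[1];
            if (Math.abs(sum - r * v[i]) > 1e-9)
                return false;                   // C*v != r*v
        }
        return true;
    }

    public static void main(String[] args) {
        double[] poly = { 6, -5, 1 };           // 6 - 5x + x^2 = (x-2)(x-3)
        System.out.println(isQuadraticEigenvalue(poly, 2.0));
        System.out.println(isQuadraticEigenvalue(poly, 3.0));
        System.out.println(isQuadraticEigenvalue(poly, 4.0)); // not a root
    }
}
```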
3614497a6034917ffb2a51696d9a1c634e4c2ca6
Equations
0
18
153
97
2016-01-23T22:04:30Z
Peter
1
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provides complete control over memory and which algorithms are used, but is verbose and has a steeper learning curve. Alternatively, the object oriented API (SimpleMatrix) is easier to use, but sacrifices control over memory and supports a limited set of operators. Neither API produces code which reads much like equations written mathematically.
Languages, such as Matlab, are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators allowing for more natural code, see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions, see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equation code is compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API. There are situations where those other interfaces are easier to use and most programs would need to use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DenseMatrix64F P , DenseMatrix64F F , DenseMatrix64F Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those matrices.<br>
'''Line 04:''' Process() is called and passed in a text string with an equation in it. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' Process() is called again and P is set to the result of adding S and Q together. Because P is aliased to the input matrix P that matrix is changed.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" from a multivariate normal distribution defined by matrices 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built in constant. Since 'p' is lazily defined how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix you could use eq.lookupMatrix() and eq.lookupInteger() for integers. If you don't know the variable's type then eq.lookupVariable() is what you need.
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
Will assign P to a 3x3 matrix with 10's all along its diagonal. Other matrices can also be included inside:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will concatenate A and B vertically.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs very literal translations of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C, save the result, then scale B by 2.5, save the result, multiply the results together, save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
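The difference between the literal translation and a fused call can be sketched in plain Java (hypothetical helper names, small row-major arrays standing in for DenseMatrix64F). literal() mirrors what the compiler currently emits, creating temporaries for C' and 2.5*B, while fused() computes the same product in one pass the way a call to CommonOps.multTransB() plus scaling effectively would:

```java
// Literal translation of A = 2.5*B*C' (temporaries for each step) versus
// a fused single-pass version; both produce the same matrix.
public class LiteralVsFused {
    // literal: transpose C, scale B, then multiply the two temporaries
    static double[][] literal(double[][] B, double[][] C) {
        int n = B.length;
        double[][] Ct = new double[n][n], sB = new double[n][n], A = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                Ct[i][j] = C[j][i];        // temporary: C'
                sB[i][j] = 2.5 * B[i][j];  // temporary: 2.5*B
            }
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    A[i][j] += sB[i][k] * Ct[k][j];
        return A;
    }

    // fused: one pass, no temporaries; C' is handled by swapping indices
    static double[][] fused(double[][] B, double[][] C) {
        int n = B.length;
        double[][] A = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    A[i][j] += 2.5 * B[i][k] * C[j][k];
        return A;
    }
}
```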
Compiling a text string into executable instructions requires a bit of overhead, but once compiled it can be run very quickly. When dealing with larger matrices the overhead involved is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the results with out needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can have a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are only declared once and allows the user to control when memory is created/destroyed.
To make it clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws some weird exception or just doesn't do what you expected. To see the tokens and sequence of operations set the second parameter in compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print out:
<syntaxhighlight lang="java">
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</syntaxhighlight>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DenseMatrix64F and SimpleMatrix can be aliased. A copy of scalar numbers can also be aliased. When a variable is aliased a reference to the data is saved and associated with the given name.
<syntaxhighlight lang="java">
DenseMatrix64F x = new DenseMatrix64F(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As is shown above, the string name for a variable does not have to be the same as the Java name of the variable. Here is an example where an integer and a double are aliased.
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
</syntaxhighlight>
If after benchmarking your code you discover that the alias operation is slowing it down (a hashmap lookup is done internally) then you should consider the following faster, but uglier, alternative.
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
</syntaxhighlight>
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer set from 'a' to 'b', where 'a' and 'b' must be integers themselves. To specify every row or column use ":", and all rows or columns from 'a' onward can be referenced with "a:". Finally, a single row or column can be referenced by just typing its number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the submatrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
Defines a 3x3 matrix with 5,4,1 for its diagonal elements. Visually this looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
Will concatenate matrices A, B, and C along their rows. They must have the same number of columns. As you might guess, to concatenate along columns you would
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A) Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
</pre>
'''Symbols'''
<pre>
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
Order of operations: [ ' ] precedes [ ^ .^ ] precedes [ * / .* ./ ] precedes [ + - ]
= Specialized Submatrix and Matrix Construction =
<pre>
Extracts a sub-matrix from A with rows 1 to 10 (inclusive) and column 3.
A(1:10,3)
Extracts a sub-matrix from A with rows 2 to numRows-1 (inclusive) and all the columns.
A(2:,:)
Will concat A and B along their columns and then concat the result with C along their rows.
[A,B;C]
Defines a 3x2 matrix.
[1 2; 3 4; 4 5]
You can also perform operations inside:
[[2 3 4]';[4 5 6]']
Will assign B to the sub-matrix in A.
A(1:3,4:8) = B
</pre>
= Integer Number Sequences =
Previous example code has made much use of integer number sequences. There are three different types of integer number sequences: 'explicit', 'for', and 'for-range'. They can also be combined.
<pre>
1) Explicit:
Example: "1 2 4 0"
Example: "1 2,-7,4" Commas needed to create negative numbers. Otherwise it will be subtraction.
2) for:
Example: "2:10" Sequence of "2 3 4 5 6 7 8 9 10"
Example: "2:2:10" Sequence of "2 4 6 8 10"
3) for-range:
Example: "2:" Sequence of "2 3 ... max"
Example: "2:2:" Sequence of "2 4 ... max"
4) combined:
Example: "1 2 7:10" Sequence of "1 2 7 8 9 10"
</pre>
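A minimal sketch of how a 'for' style sequence a:step:b expands (a hypothetical helper, not Equation's actual parser):

```java
// Expands the 'for' sequence notation a:step:b into an explicit array,
// e.g. 2:2:10 -> {2, 4, 6, 8, 10}. (Illustrative only.)
public class SequenceExpand {
    static int[] expand(int a, int step, int b) {
        int n = (b - a) / step + 1;   // number of terms, endpoints inclusive
        int[] out = new int[n];
        for (int i = 0; i < n; i++)
            out[i] = a + i * step;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(expand(2, 2, 10)));
    }
}
```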
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The output matrix should also be resized.
[[Example Customizing Equations]]
= User Defined Macros =
Macros are used to insert patterns into the code. Consider this example:
<syntaxhighlight lang="java">
eq.process("macro ata( a ) = (a'*a)");
eq.process("b = ata(c)");
</syntaxhighlight>
The first line defines a macro named "ata" with one parameter 'a'. When compiled, the equation in the second
line is expanded into "b = (c'*c)". The "(" ")" in the macro isn't strictly necessary in this situation, but
is a good practice. Consider the following.
<syntaxhighlight lang="java">
eq.process("b = ata(c)*r");
</syntaxhighlight>
Will become "b = (c'*c)*r", but without the () it would be "b = c'*c*r", which is not the same thing!
<p><b>NOTE:</b> In the future macros might be replaced with functions. Macros are harder for the user to debug, but
functions are harder for EJML's developers to implement.</p>
6a5de79ae898b06333e6cffc159c3c306faec8ec
Matrix Decompositions
0
26
157
75
2016-11-09T19:03:15Z
Peter
1
wikitext
text/x-wiki
#summary How to perform common matrix decompositions in EJML
= Introduction =
Matrix decompositions are used to reduce a matrix to a simpler form, from which systems can be easily solved and characteristics extracted. Below is a list of matrix decompositions and the data structures for which implementations exist.
{| class="wikitable"
! Decomposition !! DenseMatrix64F !! BlockMatrix64F !! CDenseMatrix64F
|-
| LU || Yes || || Yes
|-
| Cholesky L`*`L<sup>T</sup> and R<sup>T</sup>`*`R || Yes || Yes || Yes
|-
| Cholesky L`*`D`*`L<sup>T</sup> || Yes || ||
|-
| QR || Yes || Yes || Yes
|-
| QR Column Pivot || Yes || ||
|-
| Singular Value Decomposition (SVD) || Yes || ||
|-
| Generalized Eigen Value || Yes || ||
|-
| Symmetric Eigen Value || Yes || Yes ||
|-
| Bidiagonal || Yes || ||
|-
| Tridiagonal || Yes || Yes || Yes
|-
| Hessenberg || Yes || || Yes
|}
= Solving Using Matrix Decompositions =
Decompositions such as LU and QR are used to solve a linear system. A common mistake in EJML is to directly decompose the matrix instead of using a LinearSolver. LinearSolvers simplify the process of solving a linear system, are very fast, and will automatically be updated as new algorithms are added. It is recommended that you use them whenever possible.
For more information on LinearSolvers see the wikipage at [[Solving Linear Systems]].
= SimpleMatrix =
SimpleMatrix has an easy-to-use interface built in for SVD and EVD:
<syntaxhighlight lang="java">
SimpleSVD svd = A.svd();
SimpleEVD evd = A.eig();
SimpleMatrix U = svd.getU();
</syntaxhighlight>
where A is a SimpleMatrix.
As with most operators in SimpleMatrix, it is possible to chain a decomposition with other commands. For instance, to print the singular values in a matrix:
<syntaxhighlight lang="java">
A.svd().getW().extractDiag().transpose().print();
</syntaxhighlight>
Other decompositions can be performed by accessing the internal DenseMatrix64F and using the decompositions shown in the following section. The following is an example of applying a Cholesky decomposition.
<syntaxhighlight lang="java">
CholeskyDecomposition<DenseMatrix64F> chol = DecompositionFactory.chol(A.numRows(),true);
if( !chol.decompose(A.getMatrix()))
throw new RuntimeException("Cholesky failed!");
SimpleMatrix L = SimpleMatrix.wrap(chol.getT(null));
</syntaxhighlight>
= DecompositionFactory =
The best way to create a matrix decomposition is by using DecompositionFactory. Directly instantiating a decomposition is discouraged because of the added complexity. DecompositionFactory is updated as new and faster algorithms are added.
<syntaxhighlight lang="java">
public interface DecompositionInterface<T extends Matrix64F> {
/**
* Computes the decomposition of the input matrix. Depending on the implementation
* the input matrix might be stored internally or modified. If it is modified then
* the function {@link #inputModified()} will return true and the matrix should not be
* modified until the decomposition is no longer needed.
*
* @param orig The matrix which is being decomposed. Modification is implementation dependent.
* @return Returns if it was able to decompose the matrix.
*/
public boolean decompose( T orig );
/**
* Checks whether the input matrix to {@link #decompose(org.ejml.data.DenseMatrix64F)} is modified during
* the decomposition process.
*
* @return true if the input matrix to decompose() is modified.
*/
public boolean inputModified();
}
</syntaxhighlight>
Most decompositions in EJML implement DecompositionInterface. To decompose matrix "A", simply call decompose(A). It returns true if there were no errors while decomposing and false otherwise. While in general you can trust the results when true is returned, some algorithms can have faults that are not reported. This is true for all linear algebra libraries.
To minimize memory usage, most decompositions will modify the original matrix passed into decompose(). Call inputModified() to determine if the input matrix is modified or not. If it is modified, and you do not wish it to be modified, just pass in a copy of the original instead.
Below is an example of how to compute the SVD of a matrix:
<syntaxhighlight lang="java">
void decompositionExample( DenseMatrix64F A ) {
SingularValueDecomposition<DenseMatrix64F> svd = DecompositionFactory.svd(A.numRows,A.numCols);
if( !svd.decompose(A) )
throw new RuntimeException("Decomposition failed");
DenseMatrix64F U = svd.getU(null,false);
DenseMatrix64F W = svd.getW(null);
DenseMatrix64F V = svd.getV(null,false);
}
</syntaxhighlight>
Note how it checks the returned value from decompose.
In addition, DecompositionFactory provides functions for computing the quality of a decomposition. Being able to measure the decomposition's quality is an important way to validate its correctness. It works by "reconstructing" the original matrix and then computing the difference between the reconstruction and the original. The smaller the quality value, the better the decomposition, with an ideal value of around 1e-15 in most cases.
<syntaxhighlight lang="java">
if( DecompositionFactory.quality(A,svd) > 1e-3 )
throw new RuntimeException("Bad decomposition");
</syntaxhighlight>
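Conceptually, the quality measure is a relative reconstruction error: rebuild the matrix from its factors and compare against the original. A minimal plain-Java sketch of that idea using the Frobenius norm (illustrative only; not EJML's exact implementation):

```java
// Illustrative only: relative Frobenius-norm error between a matrix A and
// its reconstruction Ahat, i.e. ||A - Ahat||_F / ||A||_F. A value near
// machine epsilon (~1e-15) indicates an accurate decomposition.
public class QualityDemo {
    public static double relativeError(double[][] A, double[][] Ahat) {
        double diff = 0, norm = 0;
        for (int i = 0; i < A.length; i++) {
            for (int j = 0; j < A[i].length; j++) {
                double d = A[i][j] - Ahat[i][j];
                diff += d * d;          // accumulate squared difference
                norm += A[i][j] * A[i][j]; // accumulate squared magnitude of A
            }
        }
        return Math.sqrt(diff) / Math.sqrt(norm);
    }

    public static void main(String[] args) {
        double[][] A    = {{2, 0}, {0, 3}};
        double[][] Ahat = {{2, 0}, {0, 3}};
        System.out.println(relativeError(A, Ahat)); // 0.0 for a perfect reconstruction
    }
}
```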
List of functions in DecompositionFactory
{| class="wikitable"
! Decomposition !! Code
|-
| LU || DecompositionFactory.lu()
|-
| QR || DecompositionFactory.qr()
|-
| QRP || DecompositionFactory.qrp()
|-
| Cholesky || DecompositionFactory.chol()
|-
| Cholesky LDL || DecompositionFactory.cholLDL()
|-
| SVD || DecompositionFactory.svd()
|-
| Eigen || DecompositionFactory.eig()
|}
= Helper Functions for SVD and Eigen =
Two classes, SingularOps and EigenOps, are provided for extracting useful information from these decompositions or for highly specialized ways of computing the decompositions. Below is a list of the more common uses of these functions:
SingularOps
*descendingOrder()
**In EJML the ordering of the returned singular values is not in general guaranteed. This function will reorder the U,W,V matrices such that the singular values are in the standard largest to smallest ordering.
*nullSpace()
**Computes the null space from the provided decomposition.
*rank()
**Returns the matrix's rank.
*nullity()
**Returns the matrix's nullity.
EigenOps
*computeEigenValue()
**Given an eigenvector, compute its eigenvalue.
*computeEigenVector()
**Given an eigenvalue, compute its eigenvector.
*boundLargestEigenValue()
**Returns a lower and upper bound for the largest eigenvalue.
*createMatrixD() and createMatrixV()
**Reformats the results such that two matrices (D and V) contain the eigenvalues and eigenvectors respectively. This is similar to the format used by other libraries such as Jama.
9e1af75355a8e677222eeec85fcf9849e3e4733b
User:Spam
2
59
169
2017-01-09T06:35:16Z
Peter
1
Created page with "test"
wikitext
text/x-wiki
test
a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
Frequently Asked Questions
0
4
208
207
2017-05-18T04:33:05Z
Peter
1
/* What version of Java? */
wikitext
text/x-wiki
#summary Frequently Asked Questions
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check. Ask yourself: how much memory is that matrix using, and can my computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (columns * rows*8)/(1024*1024*1024)
Now take that number and multiply it by 3 or 4 to take into account overhead/working memory, and that's about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can have at most about 2^31 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using other libraries; see below.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer the time to compute the solution could well exceed the lifetime of a typical human.
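The formula above is easy to sanity-check in code. The helper below is illustrative only (it is not part of EJML):

```java
// Illustrative only: estimated storage for a dense matrix of 8-byte doubles,
// i.e. (rows * columns * 8) / 1024^3 gigabytes.
public class MemoryEstimate {
    public static double gigabytes(long rows, long cols) {
        return (rows * cols * 8.0) / (1024.0 * 1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        // A 1024 x 1048576 matrix of doubles needs 8 GB before any overhead.
        System.out.println(gigabytes(1024, 1024 * 1024)); // 8.0
    }
}
```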
== Will EJML work on Android? ==
Yes, EJML has been used for quite some time on Android. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar on the Maven Central repository. See [Download] for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML is in the early stages of adding sparse matrix support. Currently only basic operations are supported and no decompositions. In the meantime, the following libraries do provide some support for sparse matrices. Note: I have not personally used any of these libraries with sparse matrices.
* [https://sites.google.com/site/piotrwendykier/software/csparsej CSparseJ]
* [http://la4j.org/ la4j]
* [https://github.com/fommil/matrix-toolkits-java MTJ]
== How do I do cross product? ==
Cross product and other geometric operations are outside of the scope of EJML. EJML is focused on linear algebra and does not aim to mirror tools such as Matlab.
== What version of Java? ==
EJML can be compiled with Java 1.7 and beyond. With a few minor modifications to the source code you can get it to compile with 1.5.
25f4864624ca26d6d79c5b563404626f29b15dcd
Procedural
0
28
209
92
2017-05-18T04:44:37Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes DenseMatrix matrix types. A complete list of these data types is given [[#DenseMatrix Types|below]]. These classes themselves only provide very basic operators for accessing elements within a matrix, as well as its size and shape. The complete set of functions for manipulating DenseMatrix is available in various Ops classes, described below.
Internally, all dense matrix classes store the matrix in a single array using a row-major format. Fixed sized matrices and vectors unroll the matrix, storing each element as a separate class variable. This allows for much faster access and avoids array overhead. However, if fixed sized matrices get too large then performance starts to drop, due to what I suspect are CPU caching issues.
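Concretely, in row-major storage the element at (row, col) of a matrix with numCols columns lives at index row*numCols + col of the backing array. A tiny illustrative sketch of that index arithmetic (not EJML source):

```java
// Illustrative only: row-major index arithmetic as used by dense
// row-major matrix storage (one flat array, rows laid out consecutively).
public class RowMajorDemo {
    public static int index(int row, int col, int numCols) {
        return row * numCols + col;
    }

    public static void main(String[] args) {
        // Element (2,3) of a matrix with 5 columns sits at flat index 13.
        System.out.println(index(2, 3, 5)); // 13
    }
}
```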
While it has a sharper learning curve, it is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= DenseMatrix Types =
{| style="wikitable"
! Name !! Description
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real Matrix
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real Matrix
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex Matrix
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex Matrix
|-
| {{DocLink|org/ejml/data/FixedMatrix3x3_64F.html|FixedMatrixNxN_F64F}} || Fixed Size Dense Real Matrix
|-
| {{DocLink|org/ejml/data/FixedMatrix3_64F.html|FixedMatrixN_F64F}} || Fixed Size Dense Real Vector
|}
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating *MatrixRMaj matrices, and most are contained inside the org.ejml.dense.row package. The list below is provided for real matrices. Other matrix types can be found by changing the suffix.
{| class="wikitable"
! Suffix || Matrix Type
|-
| DDRM || Dense Double Real
|-
| FDRM || Dense Float Real
|-
| ZDRM || Dense Double Complex
|-
| CDRM || Dense Float Complex
|}
; {{OpsDocLink|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
e36801b9f3c98caa87092330ec17a9d1ce26bf55
211
209
2017-05-18T04:52:27Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural API processes DenseMatrix matrix types. A complete list of these data types is given [[#DenseMatrix Types|below]]. These classes themselves only provide very basic operators for accessing elements within a matrix, as well as its size and shape. The complete set of functions for manipulating DenseMatrix is available in various Ops classes, described below.
Internally, all dense matrix classes store the matrix in a single array using a row-major format. Fixed sized matrices and vectors unroll the matrix, storing each element as a separate class variable. This allows for much faster access and avoids array overhead. However, if fixed sized matrices get too large then performance starts to drop, due to what I suspect are CPU caching issues.
While it has a sharper learning curve, it is the most powerful API.
* [[Manual#Example Code|List of code examples]]
= DenseMatrix Types =
{| style="wikitable"
! Name !! Description
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real Matrix
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real Matrix
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex Matrix
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex Matrix
|-
| {{DocLink|org/ejml/data/DMatrix3x3.html|DMatrixNxN}} || Fixed Size Dense Real Matrix
|-
| {{DocLink|org/ejml/data/DMatrix3.html|DMatrixN}} || Fixed Size Dense Real Vector
|}
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating *MatrixRMaj matrices, and most are contained inside the org.ejml.dense.row package. The list below is provided for real matrices. Other matrix types can be found by changing the suffix.
{| class="wikitable"
! Suffix || Matrix Type
|-
| DDRM || Dense Double Real
|-
| FDRM || Dense Float Real
|-
| ZDRM || Dense Double Complex
|-
| CDRM || Dense Float Complex
|}
; {{OpsDocLink|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
a4c62d2db99cb96ad1c998133915b0dfe2749ca2
220
211
2017-05-18T14:11:25Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural interface supports all matrix types in EJML and follows a consistent naming pattern across them. Ops classes end in a suffix that indicates which type of matrix they can process. From the matrix name you can determine the element type (float, double, real, complex) and its internal data structure, e.g. row-major or block. In general, almost everyone will want to interact with row-major matrices. Conversion to block format is done automatically internally when it becomes advantageous.
''NOTE: In previous versions of EJML the matrix DMatrixRMaj was known as DenseMatrix64F.''
{| style="wikitable"
! Matrix Name !! Description
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real Matrix - Row Major
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real Matrix - Row Major
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex Matrix - Row Major
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex Matrix - Row Major
|-
| {{DocLink|org/ejml/data/DMatrix3x3.html|DMatrixNxN}} || Fixed Size Dense Real Matrix
|-
| {{DocLink|org/ejml/data/DMatrix3.html|DMatrixN}} || Fixed Size Dense Real Vector
|}
The list of Ops suffixes and the related matrix types is given below. Throughout the manual we will default to DMatrixRMaj unless there is a specific need to do otherwise.
{| class="wikitable"
! Ops Suffix || Matrix Type
|-
| DDRM || DMatrixRMaj
|-
| FDRM || FMatrixRMaj
|-
| ZDRM || ZMatrixRMaj
|-
| CDRM || CMatrixRMaj
|}
* [[Manual#Example Code|List of code examples]]
= DenseMatrix Types =
= Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations =
Several "Ops" classes provide functions for manipulating different types of matrices, and most are contained inside the org.ejml.dense.* package, where * is replaced with the matrix structure package name, e.g. row for row-major. The list below is provided for DMatrixRMaj; other matrix types can be found by changing the suffix as discussed above.
; {{OpsDocLink|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
011548fa6238b023805e926a8086d46aaf2de5c7
222
220
2017-05-18T14:26:29Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural interface supports all matrix types in EJML and follows a consistent naming pattern across them. Ops classes end in a suffix that indicates which type of matrix they can process. From the matrix name you can determine the element type (float, double, real, complex) and its internal data structure, e.g. row-major or block. In general, almost everyone will want to interact with row-major matrices. Conversion to block format is done automatically internally when it becomes advantageous.
{| class="wikitable"
! Matrix Name !! Description !! Suffix
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real - Row Major || DDRM
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real - Row Major || FDRM
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex - Row Major || ZDRM
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex - Row Major || CDRM
|-
| {{DataDocLink|DMatrixSparseCSC}} || Sparse Double Real - Compressed Column || DSCC
|-
| {{DataDocLink|DMatrixSparseTriplet}} || Sparse Double Real - Triplet || DSTL
|-
| {{DocLink|org/ejml/data/DMatrix3x3.html|DMatrix3x3}} || Dense Double Real 3x3 || DDF3
|-
| {{DocLink|org/ejml/data/DMatrix3.html|DMatrix3}} || Dense Double Real 3 || DDF3
|-
| {{DocLink|org/ejml/data/FMatrix3x3.html|FMatrix3x3}} || Dense Float Real 3x3 || FDF3
|-
| {{DocLink|org/ejml/data/FMatrix3.html|FMatrix3}} || Dense Float Real 3 || FDF3
|}
Fixed sized matrices from 2 to 7 are supported. Just replace the 3 with the desired size. ''NOTE: In previous versions of EJML the matrix DMatrixRMaj was known as DenseMatrix64F.''
= Matrix Element Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations Classes =
Several "Ops" classes provide functions for manipulating different types of matrices, and most are contained inside the org.ejml.dense.* package, where * is replaced with the matrix structure package name, e.g. row for row-major. The list below is provided for DMatrixRMaj; other matrix types can be found by changing the suffix as discussed above.
; {{OpsDocLink|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
10da8946867537ba353337db38dfc6bd29658559
223
222
2017-05-18T14:26:54Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities and provides much more control over which algorithms are used and when memory is created. The downside to this increased control is the added difficulty in programming; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural interface supports all matrix types in EJML and follows a consistent naming pattern across them. Ops classes end in a suffix that indicates which type of matrix they can process. From the matrix name you can determine the element type (float, double, real, complex) and its internal data structure, e.g. row-major or block. In general, almost everyone will want to interact with row-major matrices. Conversion to block format is done automatically internally when it becomes advantageous.
{| class="wikitable"
! Matrix Name !! Description !! Suffix
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real - Row Major || DDRM
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real - Row Major || FDRM
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex - Row Major || ZDRM
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex - Row Major || CDRM
|-
| {{DataDocLink|DMatrixSparseCSC}} || Sparse Double Real - Compressed Column || DSCC
|-
| {{DataDocLink|DMatrixSparseTriplet}} || Sparse Double Real - Triplet || DSTL
|-
| {{DocLink|org/ejml/data/DMatrix3x3.html|DMatrix3x3}} || Dense Double Real 3x3 || DDF3
|-
| {{DocLink|org/ejml/data/DMatrix3.html|DMatrix3}} || Dense Double Real 3 || DDF3
|-
| {{DocLink|org/ejml/data/FMatrix3x3.html|FMatrix3x3}} || Dense Float Real 3x3 || FDF3
|-
| {{DocLink|org/ejml/data/FMatrix3.html|FMatrix3}} || Dense Float Real 3 || FDF3
|}
Fixed sized matrices from 2 to 6 are supported. Just replace the 3 with the desired size. ''NOTE: In previous versions of EJML the matrix DMatrixRMaj was known as DenseMatrix64F.''
= Matrix Element Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
= Operations Classes =
Several "Ops" classes provide functions for manipulating different types of matrices, and most are contained inside the org.ejml.dense.* package, where * is replaced with the matrix structure package name, e.g. row for row-major. The list below is provided for DMatrixRMaj; other matrix types can be found by changing the suffix as discussed above.
; {{OpsDocLink|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{OpsDocLink|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{OpsDocLink|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{OpsDocLink|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{OpsDocLink|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{OpsDocLink|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{OpsDocLink|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed sized matrices FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
377a323a1012ced92dc5ce1aa136c6e34bc7a3cb
Template:OpsDocLink
10
32
210
87
2017-05-18T04:50:45Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/dense/row/{{{1}}}.html|{{{1}}} }}
473a507bfbad0083c8f4b4b5d84e20df92895e48
SimpleMatrix
0
30
212
94
2017-05-18T04:55:14Z
Peter
1
wikitext
text/x-wiki
SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was originally inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. When using SimpleMatrix, memory management is automatically handled and it allows commands to be chained together using a flow paradigm. Switching between SimpleMatrix and the [[Procedural]] interface is easy, enabling the two programming paradigms to be mixed in the same code base.
When invoking a function in SimpleMatrix none of the input matrices, including the 'this' matrix, are modified during the function call. There is a slight performance hit when using SimpleMatrix and less control over memory management. See [[Performance]] for a comparison of runtime performance of the different interfaces.
Below is a brief overview of SimpleMatrix concepts.
== Chaining Operations ==
When using SimpleMatrix operations can be chained together. Chained operations are often easier to read and write.
<syntaxhighlight lang="java">
public SimpleMatrix process( SimpleMatrix A , SimpleMatrix B ) {
return A.transpose().mult(B).scale(12).invert();
}
</syntaxhighlight>
is equivalent to the following Matlab code:
<syntaxhighlight lang="java">C = inv((A' * B)*12.0)</syntaxhighlight>
== Working with DMatrixRMaj ==
To convert a {{DataDocLink|DMatrixRMaj}} into a SimpleMatrix, call the wrap() function. Then to get access to the internal DMatrixRMaj inside a SimpleMatrix, call getMatrix().
<syntaxhighlight lang="java">
public DMatrixRMaj compute( DMatrixRMaj A , DMatrixRMaj B ) {
SimpleMatrix A_ = SimpleMatrix.wrap(A);
SimpleMatrix B_ = SimpleMatrix.wrap(B);
return (DMatrixRMaj)A_.mult(B_).getMatrix();
}
</syntaxhighlight>
A {{DataDocLink|DMatrixRMaj}} can also be passed into the SimpleMatrix constructor, but this will copy the input matrix. Unlike when wrap() is used, changes to the new SimpleMatrix will not modify the original DMatrixRMaj.
== Accessors ==
*get( row , col )
*set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
*get( index )
*set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
*iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
== Submatrices ==
A submatrix is a matrix whose elements are a subset of another matrix. Several different functions are provided for manipulating submatrices.
; extractMatrix : Extracts a rectangular submatrix from the original matrix.
; extractDiag : Creates a column vector containing just the diagonal elements of the matrix.
; extractVector : Extracts either an entire row or column.
; insertIntoThis : Inserts the passed in matrix into 'this' matrix.
; combine : Creates a new matrix that is a combination of the two inputs.
== Decompositions ==
Simplified ways to use popular matrix decompositions are provided. These decompositions provide fewer choices than the equivalents for DMatrixRMaj, but should meet most people's needs.
; svd : Computes the singular value decomposition of 'this' matrix
; eig : Computes the eigen value decomposition of 'this' matrix
Direct access to other decompositions (e.g. QR and Cholesky) is not provided in SimpleMatrix because solve() and inv() is provided instead. In more advanced applications use the operator interface instead to compute those decompositions.
== Solve and Invert ==
; solve : Computes the solution to the set of linear equations
; inv : Computes the inverse of a square matrix
; pinv : Computes the pseudo-inverse for an arbitrary matrix
See [[Solving Linear Systems]] for more details on solving systems of equations.
== Other Functions ==
SimpleMatrix provides many other functions. For a complete list see the JavaDoc for {{DocLink|org/ejml/simple/SimpleBase.html|SimpleBase}} and {{DocLink|org/ejml/simple/SimpleMatrix.html|SimpleMatrix}}. Note that SimpleMatrix extends SimpleBase.
== Adding Functionality ==
You can turn SimpleMatrix into your own data structure and extend its capabilities. See the [[Example_Customizing_SimpleMatrix|example on customizing SimpleMatrix]] for the details.
b839441b337c1b3acca22d591da9c7bbd62662f1
213
212
2017-05-18T04:56:26Z
Peter
1
/* Working with DMatrixRMaj */
wikitext
text/x-wiki
SimpleMatrix is an interface that provides an easy to use object oriented way of doing linear algebra. It is a wrapper around the procedural interface in EJML and was originally inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. When using SimpleMatrix, memory management is automatically handled and it allows commands to be chained together using a flow paradigm. Switching between SimpleMatrix and the [[Procedural]] interface is easy, enabling the two programming paradigms to be mixed in the same code base.
When invoking a function in SimpleMatrix none of the input matrices, including the 'this' matrix, are modified during the function call. There is a slight performance hit when using SimpleMatrix and less control over memory management. See [[Performance]] for a comparison of runtime performance of the different interfaces.
Below is a brief overview of SimpleMatrix concepts.
== Chaining Operations ==
When using SimpleMatrix operations can be chained together. Chained operations are often easier to read and write.
<syntaxhighlight lang="java">
public SimpleMatrix process( SimpleMatrix A , SimpleMatrix B ) {
return A.transpose().mult(B).scale(12).invert();
}
</syntaxhighlight>
is equivalent to the following Matlab code:
<syntaxhighlight lang="java">C = inv((A' * B)*12.0)</syntaxhighlight>
== Working with DMatrixRMaj ==
To convert a {{DataDocLink|DMatrixRMaj}} into a SimpleMatrix call the wrap() function. Then to get access to the internal DMatrixRMaj inside of a SimpleMatrix call getMatrix().
<syntaxhighlight lang="java">
public DMatrixRMaj compute( DMatrixRMaj A , DMatrixRMaj B ) {
SimpleMatrix A_ = SimpleMatrix.wrap(A);
SimpleMatrix B_ = SimpleMatrix.wrap(B);
return (DMatrixRMaj)A_.mult(B_).getMatrix();
}
</syntaxhighlight>
A {{DataDocLink|DMatrixRMaj}} can also be passed into the SimpleMatrix constructor, but this will copy the input matrix. Unlike when wrap() is used, changes to the new SimpleMatrix will not modify the original DMatrixRMaj.
== Accessors ==
*get( row , col )
*set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
*get( index )
*set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
*iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
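The accessors above can be sketched in a short example. This is a minimal illustration (the class name and matrix values are made up; it assumes EJML is on the classpath):

```java
import org.ejml.simple.SimpleMatrix;

public class AccessorExample {
    public static SimpleMatrix run() {
        SimpleMatrix A = new SimpleMatrix(2, 3);
        A.set(0, 1, 5.0);   // row 0, column 1
        A.set(4, 7.0);      // linear index 4 = row 1, column 1 (row-major)
        double v = A.get(0, 1) + A.get(4); // 5.0 + 7.0
        A.set(0, 0, v);
        return A;
    }

    public static void main(String[] args) {
        run().print();
    }
}
```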
== Submatrices ==
A submatrix is a matrix whose elements are a subset of another matrix. Several different functions are provided for manipulating submatrices.
; extractMatrix : Extracts a rectangular submatrix from the original matrix.
; extractDiag : Creates a column vector containing just the diagonal elements of the matrix.
; extractVector : Extracts either an entire row or column.
; insertIntoThis : Inserts the passed in matrix into 'this' matrix.
; combine : Creates a new matrix that is a combination of the two inputs.
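The submatrix functions above can be combined in a short sketch. The class name and values are illustrative, and it assumes EJML is on the classpath; note that the upper bounds of extractMatrix() are exclusive:

```java
import org.ejml.simple.SimpleMatrix;

public class SubmatrixExample {
    public static SimpleMatrix run() {
        SimpleMatrix A = SimpleMatrix.identity(4);
        // rows 0..1 and columns 0..1 (upper bounds are exclusive)
        SimpleMatrix topLeft = A.extractMatrix(0, 2, 0, 2);
        // overwrite the bottom-left 2x2 block of A
        A.insertIntoThis(2, 0, topLeft);
        // extract row 1 as a 1x4 matrix, then append it as a new bottom row
        SimpleMatrix row1 = A.extractVector(true, 1);
        return A.combine(A.numRows(), 0, row1);
    }

    public static void main(String[] args) {
        run().print();
    }
}
```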
== Decompositions ==
Simplified ways to use popular matrix decompositions are provided. These decompositions provide fewer choices than the equivalents for DMatrixRMaj, but should meet most people's needs.
; svd : Computes the singular value decomposition of 'this' matrix
; eig : Computes the eigenvalue decomposition of 'this' matrix
Direct access to other decompositions (e.g. QR and Cholesky) is not provided in SimpleMatrix because solve() and inv() are provided instead. In more advanced applications use the procedural interface to compute those decompositions.
== Solve and Invert ==
; solve : Computes the solution to the set of linear equations
; inv : Computes the inverse of a square matrix
; pinv : Computes the pseudo-inverse for an arbitrary matrix
See [[Solving Linear Systems]] for more details on solving systems of equations.
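As a minimal sketch of solve() (the system and class name here are made up for illustration; it assumes EJML is on the classpath):

```java
import org.ejml.simple.SimpleMatrix;

public class SolveExample {
    // Solves A*x = b for a small diagonal system
    public static SimpleMatrix run() {
        SimpleMatrix A = new SimpleMatrix(new double[][]{{2, 0}, {0, 4}});
        SimpleMatrix b = new SimpleMatrix(new double[][]{{2}, {8}});
        return A.solve(b); // x = [1 2]'
    }

    public static void main(String[] args) {
        run().print();
    }
}
```

For non-square or poorly conditioned systems, pinv() computes the pseudo-inverse instead.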
== Other Functions ==
SimpleMatrix provides many other functions. For a complete list see the JavaDoc for {{DocLink|org/ejml/simple/SimpleBase.html|SimpleBase}} and {{DocLink|org/ejml/simple/SimpleMatrix.html|SimpleMatrix}}. Note that SimpleMatrix extends SimpleBase.
== Adding Functionality ==
You can turn SimpleMatrix into your own data structure and extend its capabilities. See the [[Example_Customizing_SimpleMatrix|example on customizing SimpleMatrix]] for the details.
0a6e40be1f5bdcd5f02814627ac906daf8aeddbb
Equations
0
18
214
153
2017-05-18T04:58:52Z
Peter
1
wikitext
text/x-wiki
Writing succinct and readable linear algebra code in Java, using any library, is problematic. Originally EJML offered just two APIs for performing linear algebra. The procedural API provides complete control over memory and which algorithms are used, but is verbose and has a steeper learning curve. Alternatively, the object oriented API (SimpleMatrix) is easier to use, but sacrifices control over memory and has a limited set of operators. Neither of these APIs produces code which closely resembles how equations are written mathematically.
Languages such as Matlab are specifically designed for processing matrices and are much closer to mathematical notation. C++ offers the ability to overload operators, allowing for more natural code, see [http://eigen.tuxfamily.org Eigen]. To overcome this problem EJML now provides the ''Equation'' API, which allows a Matlab/Octave-like notation to be used.
This is achieved by parsing text strings containing equations and converting them into a set of executable instructions, see the usage example below:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
It is easy to see that the Equation code is compact and readable. While the syntax is heavily inspired by Matlab and its kin, it does not attempt to replicate their functionality. It is also not a replacement for SimpleMatrix or the procedural API. There are situations where those other interfaces are easier to use, and most programs would need to use a mix.
Equations is designed to have minimal overhead. It runs almost as fast as the procedural API and can be used such that all memory is predeclared.
----
__TOC__
= Quick Start =
The syntax used in Equation is very similar to Matlab and other computer algebra systems (CAS). It is assumed the reader is already familiar with these systems and can quickly pick up the syntax through these examples.
Let's start with a complete simple example then explain what's going on line by line.
<pre>
01: public void updateP( DMatrixRMaj P , DMatrixRMaj F , DMatrixRMaj Q ) {
02: Equation eq = new Equation();
03: eq.alias(P,"P",F,"F",Q,"Q");
04: eq.process("S = F*P*F'");
05: eq.process("P = S + Q");
06: }
</pre>
'''Line 02:''' Declare the Equation class.<br>
'''Line 03:''' Create aliases for each variable. This allows Equation to reference and manipulate those variables.<br>
'''Line 04:''' Process() is called and passed in a text string with an equation in it. The variable 'S' is lazily created and set to the result of F*P*F'.<br>
'''Line 05:''' Process() is called again and P is set to the result of adding S and Q together. Because P is aliased to the input matrix P that matrix is changed.
Three types of variables are supported: matrix, double, and integer. Results can be stored in each type and all can be aliased. The example below uses all three data types to compute the likelihood of "x" under a multivariable normal distribution defined by 'mu' and 'P'.
<syntaxhighlight lang="java">
eq.alias(x.numRows,"k",P,"P",x,"x",mu,"mu");
eq.process("p = (2*pi)^(-k/2)/sqrt(det(P))*exp(-0.5*(x-mu)'*inv(P)*(x-mu))");
</syntaxhighlight>
The end result 'p' will be a double. There was no need to alias 'pi' since it's a built-in constant. Since 'p' is lazily defined, how do you access the result?
<syntaxhighlight lang="java">
double p = eq.lookupDouble("p");
</syntaxhighlight>
For a matrix use eq.lookupMatrix(), and for an integer eq.lookupInteger(). If you don't know the variable's type then eq.lookupVariable() is what you need.
It is also possible to define a matrix inline:
<syntaxhighlight lang="java">
eq.process("P = [10 0 0;0 10 0;0 0 10]");
</syntaxhighlight>
This assigns P a 3x3 matrix with 10s along its diagonal. Other matrices can also be embedded inline:
<syntaxhighlight lang="java">
eq.process("P = [A ; B]");
</syntaxhighlight>
will stack A on top of B, i.e. concatenate them vertically.
Submatrices are also supported for assignment and reference.
<syntaxhighlight lang="java">
eq.process("P(2:5,0:3) = 10*A(1:4,10:13)");
</syntaxhighlight>
P(2:5,0:3) references the sub-matrix inside of P from rows 2 to 5 (inclusive) and columns 0 to 3 (inclusive).
This concludes the quick start tutorial. The remaining sections will go into more detail on each of the subjects touched upon above.
= The Compiler =
The current compiler is very basic and performs very literal translations of equations into code. For example, "A = 2.5*B*C'" could be executed with a single call to CommonOps.multTransB(). Instead it will transpose C, save the result, then scale B by 2.5, save the result, multiply the results together, save that, and finally copy the result into A. In the future the compiler will become smart enough to recognize such patterns.
Compiling the text string requires a bit of overhead, but once compiled it can be run very quickly. When dealing with larger matrices the overhead involved is insignificant, but for smaller ones it can have a noticeable impact. This is why the ability to precompile an equation is provided.
Original:
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
Precompiled:
<syntaxhighlight lang="java">
// precompile the equation
Sequence s = eq.compile("K = P*H'*inv( H*P*H' + R )");
// execute the sequence without needing to recompile
s.perform();
</syntaxhighlight>
Both are equivalent, but if an equation is invoked multiple times the precompiled version can give a noticeable improvement in performance. Using precompiled sequences also means that internal arrays are only declared once, which lets the user control when memory is created and destroyed.
To be clear, precompiling is only recommended when dealing with smaller matrices or when tighter control over memory is required.
When an equation is precompiled you can still change the alias for a variable.
<syntaxhighlight lang="java">
eq.alias(0,"sum",0,"i");
Sequence s = eq.compile("sum = sum + i");
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
s.perform();
}
</syntaxhighlight>
This will sum up the numbers from 0 to 9.
== Debugging ==
There will be times when you pass in an equation and it throws some weird exception or just doesn't do what you expected. To see the tokens and sequence of operations set the second parameter of compile() or process() to true.
For example:
<syntaxhighlight lang="java">
eq.process("y = z - H*x",true);
</syntaxhighlight>
When the application is run it will print out:
<syntaxhighlight lang="java">
Parsed tokens:
------------
VarMATRIX
ASSIGN
VarMATRIX
MINUS
VarMATRIX
TIMES
VarMATRIX
Operations:
------------
multiply-mm
subtract-mm
copy-mm
</syntaxhighlight>
= Alias =
To manipulate matrices in equations they need to be aliased. Both DMatrixRMaj and SimpleMatrix can be aliased. A copy of a scalar number can also be aliased. When a variable is aliased, a reference to the data is saved and a name is associated with it.
<syntaxhighlight lang="java">
DMatrixRMaj x = new DMatrixRMaj(6,1);
eq.alias(x,"x");
</syntaxhighlight>
Multiple variables can be aliased at the same time too
<syntaxhighlight lang="java">
eq.alias(x,"x",P,"P",h,"Happy");
</syntaxhighlight>
As shown above, the string name for a variable does not have to be the same as the Java name of the variable. Here is an example where an integer and a double are aliased.
<syntaxhighlight lang="java">
int a = 6;
eq.alias(2.3,"distance",a,"a");
</syntaxhighlight>
After a variable has been aliased you can alias the same name again to change it. Here is an example of just that:
<syntaxhighlight lang="java">
for( int i = 0; i < 10; i++ ) {
eq.alias(i,"i");
// do stuff with i
}
</syntaxhighlight>
If, after benchmarking your code, you discover that the alias operation is slowing it down (a hashmap lookup is done internally), then consider the following faster, but uglier, alternative.
<syntaxhighlight lang="java">
VariableInteger i = eq.lookupVariable("i");
for( i.value = 0; i.value < 10; i.value++ ) {
// do stuff with i
}
</syntaxhighlight>
= Submatrices =
Sub-matrices can be read from and written to. It's easy to reference a sub-matrix inside of any matrix. A few examples are below.
<syntaxhighlight lang="java">
A(1:4,0:5)
</syntaxhighlight>
Here rows 1 to 4 (inclusive) and columns 0 to 5 (inclusive) compose the sub-matrix of A. The notation "a:b" indicates an integer sequence from 'a' to 'b', where 'a' and 'b' must themselves be integers. To specify every row or column use ":", and all rows or columns from 'a' onward can be referenced with "a:". Finally, you can reference a single row or column by typing its number, e.g. "a".
<syntaxhighlight lang="java">
A(3:,3) <-- Rows from 3 to the last row and just column 3
A(:,:) <-- Every element in A
A(1,2) <-- The element in A at row=1,col=2
</syntaxhighlight>
The last example is a special case in that A(1,2) will return a double and not a 1x1 matrix. Consider the following:
<syntaxhighlight lang="java">
A(0:2,0:2) = C/B(1,2)
</syntaxhighlight>
The result of dividing the elements of matrix C by the value of B(1,2) is assigned to the submatrix in A.
A named variable can also be used to reference elements as long as it's an integer.
<syntaxhighlight lang="java">
a = A(i,j)
</syntaxhighlight>
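Putting submatrix writes and integer variables together, here is a minimal end-to-end sketch (the class name, matrix size, and values are made up for illustration; it assumes EJML is on the classpath):

```java
import org.ejml.data.DMatrixRMaj;
import org.ejml.equation.Equation;

public class SubmatrixEquationExample {
    // Fills parts of a 4x4 matrix using submatrix assignment
    public static DMatrixRMaj run() {
        DMatrixRMaj A = new DMatrixRMaj(4, 4);
        Equation eq = new Equation();
        eq.alias(A, "A", 1, "i", 2, "j");
        // write a single element using aliased integer indices
        eq.process("A(i,j) = 7");
        // write an inline 2x2 matrix into the top-left block
        eq.process("A(0:1,0:1) = [1 2;3 4]");
        return A; // A was aliased, so it is modified in place
    }

    public static void main(String[] args) {
        run().print();
    }
}
```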
= Inline Matrix =
Matrices can be created inline and are defined inside of brackets. The matrix is specified in a row-major format, where a space separates elements in a row and a semi-colon indicates the end of a row.
<syntaxhighlight lang="java">
[5 0 0;0 4.0 0.0 ; 0 0 1]
</syntaxhighlight>
Defines a 3x3 matrix with 5, 4, 1 for its diagonal elements. Visually this looks like:
<syntaxhighlight lang="java">
[ 5 0 0 ]
[ 0 4 0 ]
[ 0 0 1 ]
</syntaxhighlight>
An inline matrix can be used to concatenate other matrices together.
<syntaxhighlight lang="java">
[ A ; B ; C ]
</syntaxhighlight>
Will concatenate matrices A, B, and C along their rows, stacking them vertically. They must have the same number of columns. As you might guess, to concatenate along columns you would write
<syntaxhighlight lang="java">
[ A B C ]
</syntaxhighlight>
and each matrix must have the same number of rows. Inner matrices are also allowed
<syntaxhighlight lang="java">
[ [1 2;2 3] [4;5] ; A ]
</syntaxhighlight>
which will result in
<syntaxhighlight lang="java">
[ 1 2 4 ]
[ 2 3 5 ]
[ A ]
</syntaxhighlight>
= Built in Functions and Variables =
'''Constants'''
<pre>
pi = Math.PI
e = Math.E
</pre>
'''Functions'''
<pre>
eye(N) Create an identity matrix which is N by N.
eye(A) Create an identity matrix which is A.numRows by A.numCols
normF(A) Frobenius norm of the matrix.
det(A) Determinant of the matrix
inv(A) Inverse of a matrix
pinv(A) Pseudo-inverse of a matrix
rref(A) Reduced row echelon form of A
trace(A) Trace of the matrix
zeros(r,c) Matrix full of zeros with r rows and c columns.
ones(r,c) Matrix full of ones with r rows and c columns.
diag(A) If a vector then returns a square matrix with diagonal elements filled with vector
diag(A) If a matrix then it returns the diagonal elements as a column vector
dot(A,B) Returns the dot product of two vectors as a double. Does not work on general matrices.
solve(A,B) Returns the solution X from A*X = B.
kron(A,B) Kronecker product
abs(A) Absolute value of A.
max(A) Element with the largest value in A.
min(A) Element with the smallest value in A.
pow(a,b) Scalar power of a to b. Can also be invoked with "a^b".
sin(a) Math.sin(a) for scalars only
cos(a) Math.cos(a) for scalars only
atan(a) Math.atan(a) for scalars only
atan2(a,b) Math.atan2(a,b) for scalars only
exp(a) Math.exp(a) for scalars and element-wise matrices
log(a) Math.log(a) for scalars and element-wise matrices
</pre>
'''Symbols'''
<pre>
'*' multiplication (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'+' addition (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'-' subtraction (Matrix-Matrix, Scalar-Matrix, Scalar-Scalar)
'/' divide (Matrix-Scalar, Scalar-Scalar)
'/' matrix solve "x=b/A" is equivalent to x=solve(A,b) (Matrix-Matrix)
'^' Scalar power. a^b is a to the power of b.
'\' left-divide. Same as divide but reversed. e.g. x=A\b is x=solve(A,b)
'.*' element-wise multiplication (Matrix-Matrix)
'./' element-wise division (Matrix-Matrix)
'.^' element-wise power. (scalar-scalar) (matrix-matrix) (scalar-matrix) (matrix-scalar)
''' matrix transpose
'=' assignment by value (Matrix-Matrix, Scalar-Scalar)
</pre>
Order of operations: [ ' ] precedes [ ^ .^ ] precedes [ * / .* ./ ] precedes [ + - ]
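The order of operations can be checked with a small scalar sketch (the class name and values are illustrative; it assumes EJML is on the classpath):

```java
import org.ejml.equation.Equation;

public class PrecedenceExample {
    // Shows that '^' binds tighter than '*', which binds tighter than '+'
    public static double run() {
        Equation eq = new Equation();
        eq.alias(2.0, "a", 3.0, "b");
        eq.process("c = a + b*a^2"); // parsed as a + (b*(a^2)) = 2 + 3*4 = 14
        return eq.lookupDouble("c");
    }

    public static void main(String[] args) {
        System.out.println(PrecedenceExample.run());
    }
}
```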
= Specialized Submatrix and Matrix Construction =
<pre>
Extracts a sub-matrix from A with rows 1 to 10 (inclusive) and column 3.
A(1:10,3)
Extracts a sub-matrix from A with rows 2 to numRows-1 (inclusive) and all the columns.
A(2:,:)
Will concat A and B along their columns and then concat the result with C along their rows.
[A,B;C]
Defines a 3x2 matrix.
[1 2; 3 4; 4 5]
You can also perform operations inside:
[[2 3 4]';[4 5 6]']
Will assign B to the sub-matrix in A.
A(1:3,4:8) = B
</pre>
= Integer Number Sequences =
Previous example code has made much use of integer number sequences. There are three different types of integer number sequences: 'explicit', 'for', and 'for-range'. They can also be combined.
<pre>
1) Explicit:
Example: "1 2 4 0"
Example: "1 2,-7,4" Commas needed to create negative numbers. Otherwise it will be subtraction.
2) for:
Example: "2:10" Sequence of "2 3 4 5 6 7 8 9 10"
Example: "2:2:10" Sequence of "2 4 6 8 10"
3) for-range:
Example: "2:" Sequence of "2 3 ... max"
Example: "2:2:" Sequence of "2 4 ... max"
4) combined:
Example: "1 2 7:10" Sequence of "1 2 7 8 9 10"
</pre>
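A "for" sequence can be used to pull out every other column, for example. This is a minimal sketch (the class name and data are made up; it assumes EJML is on the classpath):

```java
import org.ejml.data.DMatrixRMaj;
import org.ejml.equation.Equation;

public class SequenceExample {
    // Extracts columns 2, 4, 6, 8 of a row vector using the sequence 2:2:8
    public static DMatrixRMaj run() {
        DMatrixRMaj A = new DMatrixRMaj(1, 10);
        for (int i = 0; i < 10; i++) A.set(0, i, i);
        DMatrixRMaj B = new DMatrixRMaj(1, 4);
        Equation eq = new Equation();
        eq.alias(A, "A", B, "B");
        eq.process("B = A(0, 2:2:8)"); // B becomes [2 4 6 8]
        return B; // B was aliased, so the result is written into it
    }

    public static void main(String[] args) {
        run().print();
    }
}
```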
= User Defined Functions =
It's easy to add your own custom functions too. A custom function implements ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes. It is then added to the ManagerFunctions in Equation by calling add(). The output matrix should also be resized.
[[Example Customizing Equations]]
= User Defined Macros =
Macros are used to insert patterns into the code. Consider this example:
<syntaxhighlight lang="java">
eq.process("macro ata( a ) = (a'*a)");
eq.process("b = ata(c)");
</syntaxhighlight>
The first line defines a macro named "ata" with one parameter 'a'. When compiled, the equation in the second
line is expanded to "b = (c'*c)". The "(" ")" in the macro isn't strictly necessary in this situation, but
is good practice. Consider the following.
<syntaxhighlight lang="java">
eq.process("b = ata(c)*r");
</syntaxhighlight>
Will become "b = (c'*c)*r", but without the parentheses it would be "b = c'*c*r", which is not the same thing!
<p><b>NOTE:</b> In the future macros might be replaced with functions. Macros are harder for the user to debug, but
functions are harder for EJML's developers to implement.</p>
2b9fb12c1b8acb4c49a714b80a8fe8bd2f5ca88e
Matlab to EJML
0
9
215
114
2017-05-18T05:01:18Z
Peter
1
wikitext
text/x-wiki
216
215
2017-05-18T05:03:58Z
Peter
1
wikitext
text/x-wiki
To help Matlab users quickly learn how to use EJML, a list of equivalent functions is provided in the sections below. Keep in mind that directly porting Matlab code will often result in inefficient code. In Matlab, for loops are very expensive, so extracting sub-matrices is often the preferred method. Java, like C++, handles for loops much better, and extracting and inserting sub-matrices can be much less efficient than manipulating the matrix directly.
= Equations =
If you're a Matlab user you might seriously want to consider using the [[Equations]] interface in EJML. It is similar to Matlab and can be mixed with the other interfaces.
<syntaxHighlight lang="java">
eq.process("[A(5:10,:) , ones(5,5)] .* normF(B) \ C")
</syntaxHighlight>
That equation would be horrendous to implement using SimpleMatrix or the operations interface. Take a look at the [[Equations|Equations tutorial]] to learn more.
= SimpleMatrix =
A subset of EJML's functionality is provided in [[SimpleMatrix]]. If SimpleMatrix does not provide the functionality you desire then look at the list of [[#Procedural]] functions below.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! SimpleMatrix
|-
| eye(3) || SimpleMatrix.identity(3)
|-
| diag([1 2 3]) || SimpleMatrix.diag(1,2,3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.set(A)
|-
| C(:) = 5 || C.set(5)
|-
| C(2,:) = [1,2,3] || C.setRow(1,0,1,2,3)
|-
| C(:,2) = [1,2,3] || C.setColumn(1,0,1,2,3)
|-
| C = A(2:4,3:8) || C = A.extractMatrix(1,4,2,8)
|-
| A(:,2:end) = B || A.insertIntoThis(0,1,B);
|-
| C = diag(A) || C = A.extractDiag()
|-
| C = [A,B] || C = A.combine(0,A.numCols(),B)
|-
| C = A' || C = A.transpose()
|-
| C = -A || C = A.negative()
|-
| C = A*B || C = A.mult(B)
|-
| C = A + B || C = A.plus(B)
|-
| C = A - B || C = A.minus(B)
|-
| C = 2*A || C = A.scale(2)
|-
| C = A / 2 || C = A.divide(2)
|-
| C = inv(A) || C = A.invert()
|-
| C = pinv(A) || C = A.pinv()
|-
| C = A \ B || C = A.solve(B)
|-
| C = trace(A) || C = A.trace()
|-
| det(A) || A.det()
|-
| C=kron(A,B) || C=A.kron(B)
|-
| norm(A,"fro") || A.normf()
|-
| max(abs(A(:))) || A.elementMaxAbs()
|-
| sum(A(:)) || A.elementSum()
|-
| rank(A) || A.svd(true).rank()
|-
| [U,S,V] = svd(A) || A.svd(false)
|-
| [U,S,V] = svd(A,0) || A.svd(true)
|-
| [V,L] = eig(A) || A.eig()
|}
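As a small worked example of using the table above, the Matlab normal-equations expression x = inv(A'*A)*(A'*b) maps directly onto chained SimpleMatrix calls (the data values here are made up for illustration; it assumes EJML is on the classpath):

```java
import org.ejml.simple.SimpleMatrix;

public class NormalEquationsExample {
    // Matlab: x = inv(A'*A)*(A'*b) -- least-squares line fit to three points
    public static SimpleMatrix run() {
        SimpleMatrix A = new SimpleMatrix(new double[][]{{1, 1}, {1, 2}, {1, 3}});
        SimpleMatrix b = new SimpleMatrix(new double[][]{{1}, {2}, {3}});
        return A.transpose().mult(A).invert().mult(A.transpose().mult(b)); // [0 1]'
    }

    public static void main(String[] args) {
        run().print();
    }
}
```

In practice the table's A.solve(b) row is the numerically preferable way to compute the same x.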
= Procedural =
Functions and classes in the procedural interface use DMatrixRMaj as input. Since SimpleMatrix is a wrapper around DMatrixRMaj its internal matrix can be extracted and passed into any of these functions.
{| class="wikitable" align="center" style="font-size:120%; text-align:left; border-collapse:collapse;"
|-
! Matlab !! Procedural
|-
| eye(3) || CommonOps_DDRM.identity(3)
|-
| C(1,2) = 5 || A.set(0,1,5)
|-
| C(:) = A || C.setTo(A)
|-
| C(2,:) = [1,2,3] || CommonOps_DDRM.insert(new DMatrixRMaj(1,3,true,1,2,3),C,1,0)
|-
| C(:,2) = [1,2,3] || CommonOps_DDRM.insert(new DMatrixRMaj(3,1,true,1,2,3),C,0,1)
|-
| C = A(2:4,3:8) || CommonOps_DDRM.extract(A,1,4,2,8)
|-
| diag([1 2 3]) || CommonOps_DDRM.diag(1,2,3)
|-
| C = A' || CommonOps_DDRM.transpose(A,C)
|-
| A = A' || CommonOps_DDRM.transpose(A)
|-
| A = -A || CommonOps_DDRM.changeSign(A)
|-
| C = A * B || CommonOps_DDRM.mult(A,B,C)
|-
| C = A .* B || CommonOps_DDRM.elementMult(A,B,C)
|-
| A = A .* B || CommonOps_DDRM.elementMult(A,B)
|-
| C = A ./ B || CommonOps_DDRM.elementDiv(A,B,C)
|-
| A = A ./ B || CommonOps_DDRM.elementDiv(A,B)
|-
| C = A + B || CommonOps_DDRM.add(A,B,C)
|-
| C = A - B || CommonOps_DDRM.sub(A,B,C)
|-
| C = 2 * A || CommonOps_DDRM.scale(2,A,C)
|-
| A = 2 * A || CommonOps_DDRM.scale(2,A)
|-
| C = A / 2 || CommonOps_DDRM.divide(2,A,C)
|-
| A = A / 2 || CommonOps_DDRM.divide(2,A)
|-
| C = inv(A) || CommonOps_DDRM.invert(A,C)
|-
| A = inv(A) || CommonOps_DDRM.invert(A)
|-
| C = pinv(A) || CommonOps_DDRM.pinv(A)
|-
| C = trace(A) || C = CommonOps_DDRM.trace(A)
|-
| C = det(A) || C = CommonOps_DDRM.det(A)
|-
| C=kron(A,B) || CommonOps_DDRM.kron(A,B,C)
|-
| B=rref(A) || B = CommonOps_DDRM.rref(A,-1,null)
|-
| norm(A,"fro") || NormOps_DDRM.normf(A)
|-
| norm(A,1) || NormOps_DDRM.normP1(A)
|-
| norm(A,2) || NormOps_DDRM.normP2(A)
|-
| norm(A,Inf) || NormOps_DDRM.normPInf(A)
|-
| max(abs(A(:))) || CommonOps_DDRM.elementMaxAbs(A)
|-
| sum(A(:)) || CommonOps_DDRM.elementSum(A)
|-
| rank(A,tol) || svd.decompose(A); SingularOps_DDRM.rank(svd,tol)
|-
| [U,S,V] = svd(A) || DecompositionFactory_DDRM.svd(A.numRows,A.numCols,true,true,false)
|-
| || SingularOps_DDRM.descendingOrder(U,false,S,V,false)
|-
| [U,S,V] = svd(A,0) || DecompositionFactory_DDRM.svd(A.numRows,A.numCols,true,true,true)
|-
| || SingularOps_DDRM.descendingOrder(U,false,S,V,false)
|-
| S = svd(A) || DecompositionFactory_DDRM.svd(A.numRows,A.numCols,false,false,true)
|-
| [V,D] = eig(A) || eig = DecompositionFactory_DDRM.eig(A.numCols); eig.decompose(A)
|-
| || V = EigenOps_DDRM.createMatrixV(eig); D = EigenOps_DDRM.createMatrixD(eig)
|-
| [Q,R] = qr(A) || decomp = DecompositionFactory_DDRM.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| [Q,R] = qr(A,0) || decomp = DecompositionFactory_DDRM.qr(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| [Q,R,P] = qr(A) || decomp = DecompositionFactory_DDRM.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,false); R = decomp.getR(null,false)
|-
| || P = decomp.getPivotMatrix(null)
|-
| [Q,R,P] = qr(A,0) || decomp = DecompositionFactory_DDRM.qrp(A.numRows,A.numCols)
|-
| || Q = decomp.getQ(null,true); R = decomp.getR(null,true)
|-
| || P = decomp.getPivotMatrix(null)
|-
| R = chol(A) || DecompositionFactory_DDRM.chol(A.numCols,false)
|-
| [L,U,P] = lu(A) ||DecompositionFactory_DDRM.lu(A.numCols)
|}
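A few of the mappings above combine like this in plain Java. This is a minimal sketch, not part of the official examples; the matrix values are made up for illustration and EJML (0.31+ naming) is assumed to be on the classpath:

```java
import org.ejml.data.DMatrixRMaj;
import org.ejml.dense.row.CommonOps_DDRM;
import org.ejml.dense.row.NormOps_DDRM;

public class MatlabToEjmlSketch {
    public static void main(String[] args) {
        // A = [2 0; 0 4], row-major data
        DMatrixRMaj A = new DMatrixRMaj(2, 2, true, 2, 0, 0, 4);
        DMatrixRMaj C = new DMatrixRMaj(2, 2);

        // C = inv(A); always check the return value for singular input
        if (!CommonOps_DDRM.invert(A, C))
            throw new RuntimeException("Invert failed");

        double trace = CommonOps_DDRM.trace(A); // trace(A) = 6.0
        double det = CommonOps_DDRM.det(A);     // det(A) = 8.0
        double fro = NormOps_DDRM.normF(A);     // norm(A,'fro')

        System.out.println(trace + " " + det + " " + fro);
    }
}
```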
9e7d685b7c317b802bea6035fcc20de9c8f4d490
Change Log
0
35
217
159
2017-05-18T05:08:33Z
Peter
1
wikitext
text/x-wiki
== Version 0.31 ==
2017/05/18
* Changed minimum Java version from 6 to 7
* Added SimpleEVD.getEigenvalues()
* Added SimpleSVD.getSingularValues()
* Fixed issue with generics and SimpleEVD and SimpleSVD
* Auto generated float 32-bit support of all 64-bit code
* SimpleMatrix
** Added support for float 32-bit matrices
** Replaced extractDiag() with diag() and changed behavior.
* Fixed Sized Matrix
** Added MatrixFeatures
** Added NormOps
** FixedOps
*** Discovered a bug in a unit test
*** Fixed bugs in elementAbsMax() elementAbsMin() trace()
*** Improved the speed of element-wise max and min operations
* New naming for matrices (see readme)
* New naming for operation classes (see readme)
* Operations API
** added minCol(), maxCol(), minRow(), maxRow()
* Sparse matrix support for real values
** Compressed Sparse Column (CSC) a.k.a. Compressed Column
** Triplet
** Basic operations up to triangular solve
* A script has been provided that will perform most of the refactorings:
** convert_to_ejml31.py
* Fixed a minor printing glitch for dense matrices: there was an extra space
* Equations
** Assignment to a submatrix now works with variables
*** A((2+i):10,1:20) = 5 now works
** Added sum(), sum(A,0), sum(A,1)
** Added min(A,0), max(A,0), min(A,1), max(A,1)
* Modules now have "ejml-" as a prefix to avoid collisions with other libraries
* The equations module has been moved into ejml-simple for dependency reasons
b2658a3bcc8316b8d0abc5ad96ff5a08464cef58
Tutorial Complex
0
21
218
63
2017-05-18T05:16:27Z
Peter
1
wikitext
text/x-wiki
Most real operations in EJML have a complex analog. For example, CommonOps_ZDRM is the complex equivalent of CommonOps_DDRM, and ZMatrixRMaj is the complex analog of DMatrixRMaj. Floats are also supported, e.g. FDRM -> CDRM. See the tutorial on the [[Procedural|procedural interface]] for a table of suffixes.
The following specialized functions are contained inside the complex CommonOps class. They provide different ways to convert between the real and complex matrix types.
{| class="wikitable"
! Function !! Description
|-
| CommonOps_ZDRM.convert() || Converts a real matrix into a complex matrix
|-
| CommonOps_ZDRM.stripReal() || Strips the real component and places it into a real matrix.
|-
| CommonOps_ZDRM.stripImaginary() || Strips the imaginary component and places it into a real matrix.
|-
| CommonOps_ZDRM.magnitude() || Computes the magnitude of each element and places it into a real matrix.
|}
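A minimal sketch of the conversion functions above; the matrix values are arbitrary and EJML is assumed to be on the classpath:

```java
import org.ejml.data.DMatrixRMaj;
import org.ejml.data.ZMatrixRMaj;
import org.ejml.dense.row.CommonOps_ZDRM;

public class ComplexConvertSketch {
    public static void main(String[] args) {
        DMatrixRMaj real = new DMatrixRMaj(2, 2, true, 1, 2, 3, 4);
        ZMatrixRMaj complex = new ZMatrixRMaj(2, 2);

        // Real -> complex; the imaginary components are set to zero
        CommonOps_ZDRM.convert(real, complex);

        // Strip the real component back out into a real matrix
        DMatrixRMaj stripped = new DMatrixRMaj(2, 2);
        CommonOps_ZDRM.stripReal(complex, stripped);

        stripped.print();
    }
}
```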
There is also Complex64F which contains a single complex number. [[Example Complex Math]] does a good job covering how to manipulate those objects.
9508beff41ce3d98a9e54645e7dd49a9d9984f4f
Manual
0
8
219
163
2017-05-18T05:16:44Z
Peter
1
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. The list of standard operations is typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided by EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.7 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together using a flow strategy, which is much easier to read and write. A limited subset of operations is supported and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. It can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and employ more advanced techniques? Or understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** [http://amzn.to/2hWEo8N Fundamentals of Matrix Computations by David S. Watkins]
* Classic reference book that tersely covers hundreds of algorithms
** [http://amzn.to/2h3apra Matrix Computations by G. Golub and C. Van Loan]
* Popular book on linear algebra
** [http://amzn.to/2hbeGMG6 Linear Algebra and Its Applications by Gilbert Strang]
Purchasing through these links will help EJML's developer buy high end ramen noodles.
a409fc581ed7127f48b24c6790f8863e5dd05a67
231
219
2017-05-18T17:42:42Z
Peter
1
/* External References */
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. The list of standard operations is typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided by EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.7 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together using a flow strategy, which is much easier to read and write. A limited subset of operations is supported and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. It can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and employ more advanced techniques? Or understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms
** Matrix Computations by G. Golub and C. Van Loan
* Direct Methods for Sparse Linear Systems by Timothy A. Davis
** Covers the sparse algorithms used in EJML
* Popular book on linear algebra
** Linear Algebra and Its Applications by Gilbert Strang
cb9b0921a7994a92d8a20ae2237bbfc41bfd92c4
248
231
2017-09-18T15:19:28Z
Peter
1
/* Example Code */
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. The list of standard operations is typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Other questions, such as how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, selected from common real-world problems such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided by EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.7 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together using a flow strategy, which is much easier to read and write. A limited subset of operations is supported and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. It can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Sparse Matrices|Sparse Matrix Basics]] || X || ||
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works so you can write more effective code and employ more advanced techniques? Or understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms
** Matrix Computations by G. Golub and C. Van Loan
* Direct Methods for Sparse Linear Systems by Timothy A. Davis
** Covers the sparse algorithms used in EJML
* Popular book on linear algebra
** Linear Algebra and Its Applications by Gilbert Strang
e6a8ca8977ec5e35c511e9be56c9daa10cecf8c6
Solving Linear Systems
0
20
221
58
2017-05-18T14:15:33Z
Peter
1
wikitext
text/x-wiki
A fundamental problem in linear algebra is solving systems of linear equations. A linear system is any equation that can be expressed in this format:
<pre>
A*x = b
</pre>
where ''A'' is m by n, ''x'' is n by o, and ''b'' is m by o. Most of the time o=1. The best way to solve these equations depends on the structure of the matrix ''A''. For example, if it's square and positive definite then [http://ejml.org/javadoc/org/ejml/interfaces/decomposition/CholeskyDecomposition.html Cholesky] decomposition is the way to go. On the other hand if it is tall m > n, then [http://ejml.org/javadoc/org/ejml/interfaces/decomposition/QRDecomposition.html QR] is the way to go.
Each of the three interfaces (Procedural, SimpleMatrix, Equations) provides high level ways to solve linear systems which don't require you to specify the underlying algorithm. While convenient, these are not always the best approach in high performance situations. They create/destroy memory and don't provide you with access to their full functionality. If the best performance is needed then you should use a LinearSolver or one of its derived interfaces for a specific family of algorithms.
First a description is provided on how to solve linear systems using Procedural, SimpleMatrix, and then Equations. After that an overview of LinearSolver is presented.
= High Level Interfaces =
All high level interfaces essentially use the same code at the low level, which is the Procedural interface. This means that they have the same strengths and weaknesses. Their strength is simplicity. They will automatically select LU or QR decomposition, depending on the matrix's shape.
You should use the lower level LinearSolver if any of the following are true:
* Your matrix can sometimes be singular
* You wish to perform a pseudo inverse
* You need to avoid creating new memory
* You need to select a specific decomposition
* You need access to the low level decomposition
The case of singular or nearly singular matrices is worth discussing more. All of these high level approaches do attempt to detect singular matrices. The problem is that they aren't reliable and no tweaking of thresholds will make them reliable. If you are in a situation where you need to come up with a solution and the system might be singular, then you really need to know what you are doing. If a system is singular it means there are either no solutions or infinitely many.
== Procedural ==
The way to solve linear systems in the Procedural interface is with CommonOps.solve(). Make sure you check its return value to see if it failed! It ''might'' fail if the matrix is singular or nearly singular.
<syntaxhighlight lang="java">
if( !CommonOps_DDRM.solve(A,b,x) ) {
    throw new IllegalArgumentException("Singular matrix");
}
</syntaxhighlight>
== SimpleMatrix ==
<syntaxhighlight lang="java">
try {
    SimpleMatrix x = A.solve(b);
} catch ( SingularMatrixException e ) {
    throw new IllegalArgumentException("Singular matrix");
}
</syntaxhighlight>
SingularMatrixException is a RuntimeException and you technically don't have to catch it. If you don't catch it, it will take down your whole application if the matrix is singular!
== Equations ==
<syntaxhighlight lang="java">
eq.process("x=b/A");
</syntaxhighlight>
If it's singular it will throw a RuntimeException.
= Low level Linear Solvers =
Low level linear solvers in EJML all implement the {{DocLink|org/ejml/interfaces/linsol/LinearSolver.html|LinearSolver}} interface. It provides a lot more power than the high level interfaces but is also more difficult to use and requires more diligence. For example, you can no longer assume that the solver won't modify the input matrices!
== LinearSolver ==
The LinearSolver interface is designed to be easy to use while providing most of the power that directly using a decomposition would provide.
<syntaxhighlight lang="java">
public interface LinearSolver< T extends Matrix> {
public boolean setA( T A );
public T getA();
public double quality();
public void solve( T B , T X );
public void invert( T A_inv );
public boolean modifiesA();
public boolean modifiesB();
public <D extends DecompositionInterface>D getDecomposition();
}
</syntaxhighlight>
Each linear solver implementation is built around a different decomposition. The best way to create a new LinearSolver instance is with {{DocLink|javadoc/org/ejml/dense/row/factory/LinearSolverFactory_DDRM.html|LinearSolverFactory_DDRM}}. It provides an easy way to select the correct solver without plowing through the documentation.
Two steps are required to solve a system with a LinearSolver, as is shown below:
<syntaxhighlight lang="java">
LinearSolver<DMatrixRMaj> solver = LinearSolverFactory_DDRM.qr(A.numRows,A.numCols);
if( !solver.setA(A) ) {
    throw new IllegalArgumentException("Singular matrix");
}
if( solver.quality() <= 1e-8 )
    throw new IllegalArgumentException("Nearly singular matrix");
solver.solve(b,x);
</syntaxhighlight>
As with the high-level interfaces you can't trust algorithms such as QR, LU, or Cholesky to detect singular matrices! Sometimes they will work and sometimes they will not. Even adjusting the quality threshold won't fix the problem in all situations.
Additional capabilities included in LinearSolver are:
* invert()
** Will invert a matrix more efficiently than solve() can.
* quality()
** Returns a positive number; a small value indicates a singular or nearly singular system. Much faster to compute than the SVD.
* modifiesA() and modifiesB()
** To reduce memory requirements, most LinearSolvers will modify 'A' and store the decomposition inside of it. Some do the same for 'B'. These functions tell the user if the inputs are being modified or not.
* getDecomposition()
** Provides access to the internal decomposition used.
== LinearSolverSafe ==
If the input matrices 'A' and 'B' should not be modified then LinearSolverSafe is a convenient way to ensure that precondition:
<syntaxhighlight lang="java">
LinearSolver<DMatrixRMaj> solver = LinearSolverFactory_DDRM.leastSquares();
solver = new LinearSolverSafe<DMatrixRMaj>(solver);
</syntaxhighlight>
== Pseudo Inverse ==
EJML provides two different pseudo inverses. One is SVD based and the other is QRP based, where QRP stands for QR with column pivots. QRP can be thought of as a lightweight SVD: much faster to compute, but it doesn't handle singular matrices quite as well.
<syntaxhighlight lang="java">
LinearSolver<DMatrixRMaj> pinv = LinearSolverFactory_DDRM.pseudoInverse(true);
</syntaxhighlight>
This will create an SVD based pseudo inverse. Otherwise if you specify false then it will create a QRP pseudo-inverse.
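Usage then follows the standard LinearSolver pattern shown above. Below is a minimal least-squares sketch; the matrix values are made up for illustration:

```java
import org.ejml.data.DMatrixRMaj;
import org.ejml.dense.row.factory.LinearSolverFactory_DDRM;
import org.ejml.interfaces.linsol.LinearSolver;

public class PseudoInverseSketch {
    public static void main(String[] args) {
        // Over-determined system: 3 equations, 2 unknowns
        DMatrixRMaj A = new DMatrixRMaj(3, 2, true, 1, 1, 1, 2, 1, 3);
        DMatrixRMaj b = new DMatrixRMaj(3, 1, true, 1, 2, 3);
        DMatrixRMaj x = new DMatrixRMaj(2, 1);

        // true selects the SVD based pseudo inverse; false selects QRP
        LinearSolver<DMatrixRMaj> pinv = LinearSolverFactory_DDRM.pseudoInverse(true);
        if (!pinv.setA(A))
            throw new IllegalArgumentException("setA failed");
        pinv.solve(b, x);

        // x is the least-squares solution; for this data x = [0, 1]
        x.print();
    }
}
```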
== AdjustableLinearSolver ==
In situations where rows from the linear system are added or removed (see [[Example Polynomial Fitting]]) an AdjustableLinearSolver_DDRM can be used to efficiently resolve the modified system. AdjustableLinearSolver_DDRM is an extension of LinearSolver that adds addRowToA() and removeRowFromA(), which add and remove rows from A respectively. After either is invoked, the solution can be recomputed by calling solve() again.
<syntaxhighlight lang="java">
AdjustableLinearSolver_DDRM solver = LinearSolverFactory_DDRM.adjustable();
if( !solver.setA(A) ) {
    throw new IllegalArgumentException("Singular matrix");
}
solver.solve(b,x);
// add a row
double[] row = new double[N];
... code ...
solver.addRowToA(row,2);
.... adjust b and x ....
solver.solve(b,x);
// remove a row
solver.removeRowFromA(7);
.... adjust b and x ....
solver.solve(b,x);
</syntaxhighlight>
fd00d989c5dfbc629f00152533149df205fcab1f
Matrix Decompositions
0
26
224
157
2017-05-18T15:23:41Z
Peter
1
wikitext
text/x-wiki
= Introduction =
Matrix decompositions are used to reduce a matrix to a simpler form that can be easily solved and from which characteristics can be extracted. Below is a list of matrix decompositions and the data structures for which implementations exist.
{| class="wikitable"
! Decomposition !! DMatrixRMaj !! DMatrixRBlock !! ZMatrixRMaj
|-
| LU || Yes || || Yes
|-
| Cholesky L`*`L<sup>T</sup> and R<sup>T</sup>`*`R || Yes || Yes || Yes
|-
| Cholesky L`*`D`*`L<sup>T</sup> || Yes || ||
|-
| QR || Yes || Yes || Yes
|-
| QR Column Pivot || Yes || ||
|-
| Singular Value Decomposition (SVD) || Yes || ||
|-
| Generalized Eigen Value || Yes || ||
|-
| Symmetric Eigen Value || Yes || Yes ||
|-
| Bidiagonal || Yes || ||
|-
| Tridiagonal || Yes || Yes || Yes
|-
| Hessenberg || Yes || || Yes
|}
= Solving Using Decompositions =
Decompositions, such as LU and QR, are used to solve a linear system. A common mistake in EJML is to directly decompose the matrix instead of using a LinearSolver. LinearSolvers simplify the process of solving a linear system, are very fast, and will automatically be updated as new algorithms are added. It is recommended that you use them whenever possible.
For more information on LinearSolvers see the wikipage at [[Solving Linear Systems]].
= SimpleMatrix =
SimpleMatrix has an easy to use interface built in for SVD and EVD:
<syntaxhighlight lang="java">
SimpleSVD svd = A.svd();
SimpleEVD evd = A.eig();
SimpleMatrix U = svd.getU();
</syntaxhighlight>
where A is a SimpleMatrix.
As with most operators in SimpleMatrix, it is possible to chain a decomposition with other commands. For instance, to print the singular values of a matrix:
<syntaxhighlight lang="java">
A.svd().getW().extractDiag().transpose().print();
</syntaxhighlight>
Other decompositions can be performed by accessing the internal DMatrixRMaj and using the decompositions shown in the following section. The following is an example of applying a Cholesky decomposition.
<syntaxhighlight lang="java">
CholeskyDecomposition_F64<DMatrixRMaj> chol = DecompositionFactory_DDRM.chol(A.numRows(),true);
if( !chol.decompose(A.getMatrix()))
throw new RuntimeException("Cholesky failed!");
SimpleMatrix L = SimpleMatrix.wrap(chol.getT(null));
</syntaxhighlight>
= DecompositionFactory =
The best way to create a matrix decomposition is by using DecompositionFactory_DDRM. Directly instantiating a decomposition is discouraged because of the added complexity. DecompositionFactory_DDRM is updated as new and faster algorithms are added.
<syntaxhighlight lang="java">
public interface DecompositionInterface<T extends Matrix> {
/**
* Computes the decomposition of the input matrix. Depending on the implementation
* the input matrix might be stored internally or modified. If it is modified then
* the function {@link #inputModified()} will return true and the matrix should not be
* modified until the decomposition is no longer needed.
*
* @param orig The matrix which is being decomposed. Modification is implementation dependent.
* @return Returns if it was able to decompose the matrix.
*/
public boolean decompose( T orig );
/**
* Is the input matrix to {@link #decompose(org.ejml.data.DMatrixRMaj)} is modified during
* the decomposition process.
*
* @return true if the input matrix to decompose() is modified.
*/
public boolean inputModified();
}
</syntaxhighlight>
Most decompositions in EJML implement DecompositionInterface. To decompose matrix "A" simply call decompose(A). It returns true if there were no errors while decomposing and false otherwise. While in general you can trust the results if true is returned, some algorithms can have faults that are not reported. This is true for all linear algebra libraries.
To minimize memory usage, most decompositions will modify the original matrix passed into decompose(). Call inputModified() to determine if the input matrix is modified or not. If it is modified, and you do not wish it to be modified, just pass in a copy of the original instead.
Below is an example of how to compute the SVD of a matrix:
<syntaxhighlight lang="java">
void decompositionExample( DMatrixRMaj A ) {
SingularValueDecomposition_F64<DMatrixRMaj> svd = DecompositionFactory_DDRM.svd(A.numRows,A.numCols);
if( !svd.decompose(A) )
throw new RuntimeException("Decomposition failed");
DMatrixRMaj U = svd.getU(null,false);
DMatrixRMaj W = svd.getW(null);
DMatrixRMaj V = svd.getV(null,false);
}
</syntaxhighlight>
Note how the value returned by decompose() is checked.
In addition, DecompositionFactory_DDRM provides functions for computing the quality of a decomposition. Being able to measure a decomposition's quality is an important way to validate its correctness. It works by "reconstructing" the original matrix and then computing the difference between the reconstruction and the original. The smaller the quality value, the better the decomposition, with an ideal value of around 1e-15 in most cases.
<syntaxhighlight lang="java">
if( DecompositionFactory_DDRM.quality(A,svd) > 1e-3 )
throw new RuntimeException("Bad decomposition");
</syntaxhighlight>
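The idea behind the quality metric can be sketched in plain Java (a conceptual illustration with a hypothetical class name, not EJML's actual implementation): reconstruct the matrix from its factors, then return the normalized Frobenius norm of the difference:

```java
// Sketch of a decomposition quality metric: the Frobenius norm of
// (A - reconstructed) divided by the Frobenius norm of A. A value near
// machine epsilon (~1e-15) indicates an accurate decomposition.
public class DecompositionQuality {
    public static double quality(double[][] a, double[][] reconstructed) {
        double diff = 0, norm = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[0].length; j++) {
                double d = a[i][j] - reconstructed[i][j];
                diff += d * d;
                norm += a[i][j] * a[i][j];
            }
        }
        return Math.sqrt(diff / norm);
    }
}
```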
List of functions in DecompositionFactory_DDRM
{| class="wikitable"
! Decomposition !! Code
|-
| LU || DecompositionFactory_DDRM.lu()
|-
| QR || DecompositionFactory_DDRM.qr()
|-
| QRP || DecompositionFactory_DDRM.qrp()
|-
| Cholesky || DecompositionFactory_DDRM.chol()
|-
| Cholesky LDL || DecompositionFactory_DDRM.cholLDL()
|-
| SVD || DecompositionFactory_DDRM.svd()
|-
| Eigen || DecompositionFactory_DDRM.eig()
|}
= Helper Functions for SVD and Eigen =
Two classes, SingularOps_DDRM and EigenOps_DDRM, are provided for extracting useful information from these decompositions and for highly specialized ways of computing them. Below is a list of the more commonly used functions:
SingularOps_DDRM
*descendingOrder()
**In EJML the ordering of the returned singular values is not in general guaranteed. This function will reorder the U,W,V matrices such that the singular values are in the standard largest to smallest ordering.
*nullSpace()
**Computes the null space from the provided decomposition.
*rank()
**Returns the matrix's rank.
*nullity()
**Returns the matrix's nullity.
EigenOps_DDRM
*computeEigenValue()
**Given an eigenvector, compute its eigenvalue.
*computeEigenVector()
**Given an eigenvalue, compute its eigenvector.
*boundLargestEigenValue()
**Returns a lower and upper bound for the largest eigenvalue.
*createMatrixD() and createMatrixV()
**Reformats the results such that two matrices (D and V) contain the eigenvalues and eigenvectors respectively. This is similar to the format used by other libraries such as Jama.
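The relationship used when recovering an eigenvalue from a known eigenvector can be sketched in plain Java (a hypothetical stand-alone example, independent of EJML): for an eigenvector v of A, the Rayleigh quotient v'Av / v'v yields its eigenvalue:

```java
// Plain-Java sketch: recover an eigenvalue from a known eigenvector using
// the Rayleigh quotient v'Av / v'v.
public class RayleighQuotient {
    public static double eigenvalue(double[][] a, double[] v) {
        double num = 0, den = 0;
        for (int i = 0; i < v.length; i++) {
            double av = 0; // i-th component of A*v
            for (int j = 0; j < v.length; j++)
                av += a[i][j] * v[j];
            num += v[i] * av;
            den += v[i] * v[i];
        }
        return num / den;
    }
}
```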
e2ab56d499afa2612760dab143f1e4574219d0ac
254
224
2018-03-19T12:55:49Z
Peter
1
wikitext
text/x-wiki
#summary How to perform common matrix decompositions in EJML
= Introduction =
Matrix decompositions are used to reduce a matrix to a simpler form that can be solved efficiently and used to extract the matrix's characteristics. Below is a list of matrix decompositions and the data structures for which implementations are provided.
{| class="wikitable"
! Decomposition !! DMatrixRMaj !! DMatrixRBlock !! ZMatrixRMaj
|-
| LU || Yes || || Yes
|-
| Cholesky L`*`L<sup>T</sup> and R<sup>T</sup>`*`R || Yes || Yes || Yes
|-
| Cholesky L`*`D`*`L<sup>T</sup> || Yes || ||
|-
| QR || Yes || Yes || Yes
|-
| QR Column Pivot || Yes || ||
|-
| Singular Value Decomposition (SVD) || Yes || ||
|-
| Generalized Eigen Value || Yes || ||
|-
| Symmetric Eigen Value || Yes || Yes ||
|-
| Bidiagonal || Yes || ||
|-
| Tridiagonal || Yes || Yes || Yes
|-
| Hessenberg || Yes || || Yes
|}
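As a concrete illustration of "reducing a matrix to a simpler form", here is a minimal plain-Java 2x2 LU factorization without pivoting (a sketch with a hypothetical class name, not EJML's implementation):

```java
// Minimal 2x2 LU factorization (no pivoting): A = L*U, where L is unit
// lower triangular and U is upper triangular.
public class TinyLU {
    // Returns {l21, u11, u12, u22} such that
    // [[1,0],[l21,1]] * [[u11,u12],[0,u22]] == A.
    public static double[] factor(double[][] a) {
        double l21 = a[1][0] / a[0][0];
        return new double[]{l21, a[0][0], a[0][1], a[1][1] - l21 * a[0][1]};
    }
}
```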
= Accessing Pivots and Other Internal Structures =
Most of the time you don't need access to a decomposition's internal data structures, just its results. If you need information such as the row or column pivots, use a decomposition interface; these can be created with a DecompositionFactory. For example, LUDecomposition provides access to its row pivots.
= Solving Using Decompositions =
Decompositions, such as LU and QR, are used to solve linear systems. A common mistake in EJML is to decompose the matrix directly instead of using a LinearSolver. LinearSolvers simplify the process of solving a linear system, are very fast, and are automatically updated as new algorithms are added. It is recommended that you use them whenever possible.
For more information on LinearSolvers see the wikipage at [[Solving Linear Systems]].
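The reason decompositions make solving easy can be sketched in plain Java (a hypothetical example, not EJML code): once a matrix has been reduced to upper-triangular form (such as the U from LU or the R from QR), the system is solved with simple back substitution:

```java
// Back substitution: solves U*x = b for an upper-triangular U.
public class BackSubstitution {
    public static double[] solve(double[][] u, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double sum = b[i];
            for (int j = i + 1; j < n; j++)
                sum -= u[i][j] * x[j];   // subtract already-solved terms
            x[i] = sum / u[i][i];
        }
        return x;
    }
}
```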
= SimpleMatrix =
SimpleMatrix has an easy-to-use interface built in for SVD and EVD:
<syntaxhighlight lang="java">
SimpleSVD svd = A.svd();
SimpleEVD evd = A.eig();
SimpleMatrix U = svd.getU();
</syntaxhighlight>
where A is a SimpleMatrix.
As with most operators in SimpleMatrix, it is possible to chain decompositions with other commands. For instance, to print the singular values of a matrix:
<syntaxhighlight lang="java">
A.svd().getW().extractDiag().transpose().print();
</syntaxhighlight>
Other decompositions can be performed by accessing the internal DMatrixRMaj and using the decompositions shown in the following section. The following is an example of applying a Cholesky decomposition.
<syntaxhighlight lang="java">
CholeskyDecomposition_F64<DMatrixRMaj> chol = DecompositionFactory_DDRM.chol(A.numRows(),true);
if( !chol.decompose(A.getMatrix()))
throw new RuntimeException("Cholesky failed!");
SimpleMatrix L = SimpleMatrix.wrap(chol.getT(null));
</syntaxhighlight>
= DecompositionFactory =
The best way to create a matrix decomposition is with DecompositionFactory_DDRM. Directly instantiating a decomposition class is discouraged because it adds complexity and ties your code to a specific implementation; DecompositionFactory_DDRM is updated as new and faster algorithms are added.
<syntaxhighlight lang="java">
public interface DecompositionInterface<T extends Matrix> {
/**
* Computes the decomposition of the input matrix. Depending on the implementation
* the input matrix might be stored internally or modified. If it is modified then
* the function {@link #inputModified()} will return true and the matrix should not be
* modified until the decomposition is no longer needed.
*
* @param orig The matrix which is being decomposed. Modification is implementation dependent.
* @return true if it was able to decompose the matrix.
*/
public boolean decompose( T orig );
/**
* Checks if the input matrix to {@link #decompose(org.ejml.data.DMatrixRMaj)} is modified during
* the decomposition process.
*
* @return true if the input matrix to decompose() is modified.
*/
public boolean inputModified();
}
</syntaxhighlight>
Most decompositions in EJML implement DecompositionInterface. To decompose a matrix "A", simply call decompose(A). It returns true if the matrix was decomposed without error and false otherwise. While the results can generally be trusted when true is returned, some algorithms can encounter numerical faults that are not reported. This is true of all linear algebra libraries.
To minimize memory usage, most decompositions will modify the original matrix passed into decompose(). Call inputModified() to determine if the input matrix is modified or not. If it is modified, and you do not wish it to be modified, just pass in a copy of the original instead.
Below is an example of how to compute the SVD of a matrix:
<syntaxhighlight lang="java">
void decompositionExample( DMatrixRMaj A ) {
SingularValueDecomposition_F64<DMatrixRMaj> svd = DecompositionFactory_DDRM.svd(A.numRows,A.numCols);
if( !svd.decompose(A) )
throw new RuntimeException("Decomposition failed");
DMatrixRMaj U = svd.getU(null,false);
DMatrixRMaj W = svd.getW(null);
DMatrixRMaj V = svd.getV(null,false);
}
</syntaxhighlight>
Note how the value returned by decompose() is checked.
In addition, DecompositionFactory_DDRM provides functions for computing the quality of a decomposition. Being able to measure a decomposition's quality is an important way to validate its correctness. It works by "reconstructing" the original matrix and then computing the difference between the reconstruction and the original. The smaller the quality value, the better the decomposition, with an ideal value of around 1e-15 in most cases.
<syntaxhighlight lang="java">
if( DecompositionFactory_DDRM.quality(A,svd) > 1e-3 )
throw new RuntimeException("Bad decomposition");
</syntaxhighlight>
List of functions in DecompositionFactory_DDRM
{| class="wikitable"
! Decomposition !! Code
|-
| LU || DecompositionFactory_DDRM.lu()
|-
| QR || DecompositionFactory_DDRM.qr()
|-
| QRP || DecompositionFactory_DDRM.qrp()
|-
| Cholesky || DecompositionFactory_DDRM.chol()
|-
| Cholesky LDL || DecompositionFactory_DDRM.cholLDL()
|-
| SVD || DecompositionFactory_DDRM.svd()
|-
| Eigen || DecompositionFactory_DDRM.eig()
|}
= Helper Functions for SVD and Eigen =
Two classes, SingularOps_DDRM and EigenOps_DDRM, are provided for extracting useful information from these decompositions and for highly specialized ways of computing them. Below is a list of the more commonly used functions:
SingularOps_DDRM
*descendingOrder()
**In EJML the ordering of the returned singular values is not in general guaranteed. This function will reorder the U,W,V matrices such that the singular values are in the standard largest to smallest ordering.
*nullSpace()
**Computes the null space from the provided decomposition.
*rank()
**Returns the matrix's rank.
*nullity()
**Returns the matrix's nullity.
EigenOps_DDRM
*computeEigenValue()
**Given an eigenvector, compute its eigenvalue.
*computeEigenVector()
**Given an eigenvalue, compute its eigenvector.
*boundLargestEigenValue()
**Returns a lower and upper bound for the largest eigenvalue.
*createMatrixD() and createMatrixV()
**Reformats the results such that two matrices (D and V) contain the eigenvalues and eigenvectors respectively. This is similar to the format used by other libraries such as Jama.
fb9ee2758f522f656c1736ccb38353b75d047140
Main Page
0
1
225
204
2017-05-18T15:29:49Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS] tools, that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.31''
|-
| '''Date:''' ''May 17, 2017''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) ) throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
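For intuition about the formula the three interfaces above all express, the scalar (1x1) case reduces to plain arithmetic (a hypothetical stand-alone sketch, not EJML code):

```java
// Scalar Kalman gain: with 1x1 matrices, K = P*H' * inv(H*P*H' + R)
// collapses to k = p*h / (h*p*h + r).
public class ScalarKalmanGain {
    public static double gain(double p, double h, double r) {
        return p * h / (h * p * h + r);
    }
}
```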
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
09a8cbe62528b48cc496eeed54d07e0456475d1c
243
225
2017-05-25T03:55:53Z
Peter
1
/* Code Examples */
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS] tools, that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.31''
|-
| '''Date:''' ''May 17, 2017''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
83777261054a0de222269b1a728f6b2b07dc7603
244
243
2017-09-18T14:23:27Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS] tools, that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.32''
|-
| '''Date:''' ''September 18, 2017''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [[Change Log]]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
09a41d10553f8901cab3f72046137a1a2550bb43
247
244
2017-09-18T15:13:04Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS] tools, that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.32''
|-
| '''Date:''' ''September 18, 2017''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.32/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
8bc9be2d245b3544f7d8b09cc61d4b4f2eaf1d81
250
247
2017-09-18T15:25:08Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating dense matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS] tools, that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.32''
|-
| '''Date:''' ''September 18, 2017''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.32/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Size
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
| style="vertical-align:top;" |
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for both 32-bit floats and 64-bit doubles is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of SVD and eigenvalue decomposition are finished.
</center>
379b095bf5b1e2fad78a90921efb78dac063fcd7
Random matrices, Matrix Features, and Matrix Norms
0
25
226
72
2017-05-18T17:02:32Z
Peter
1
wikitext
text/x-wiki
__TOC__
== Random Matrices ==
Random matrices and vectors are used extensively in Monte Carlo methods, simulations, and testing. There are many different ways in which a matrix can be randomized. For example, each element can be an independent random variable, or the rows/columns can be independent orthogonal vectors. EJML provides built-in methods for creating a variety of random matrices.
Functions for creating random matrices are contained in the RandomMatrices_DDRM class. A partial list of the types of random matrices it can create includes:
* Uniform distribution in each element.
* Uniform distribution along diagonal elements.
* Uniform distribution triangular.
* Symmetric from a uniform distribution.
* Random with fixed singular values.
* Random with fixed eigen values.
* Random orthogonal.
Creating a random matrix using the Procedural API is simple, as the code sample below shows:
<syntaxhighlight lang="java">
Random rand = new Random();
DMatrixRMaj A = RandomMatrices_DDRM.createSymmetric(20,-2,3,rand);
</syntaxhighlight>
This creates a random 20 by 20 symmetric matrix 'A' whose elements range from -2 to 3.
This is also easy to do using SimpleMatrix:
<syntaxhighlight lang="java">
Random rand = new Random();
SimpleMatrix A = SimpleMatrix.random64(20,20,-2,3,rand);
</syntaxhighlight>
The 64 indicates that internally the SimpleMatrix will use a DMatrixRMaj.
== Matrix Features ==
It is common to describe a matrix based on different features it might possess. A common example is a symmetric matrix, whose elements have the following property: a<sub>i,j</sub> == a<sub>j,i</sub>. Testing for certain features is often required at runtime to detect computational errors caused by bad inputs or round-off errors.
MatrixFeatures contains a list of commonly used matrix features. In practice a matrix in a computer will almost never exactly match a feature's definition due to small round-off errors. For this reason a tolerance parameter is almost always provided when testing whether a matrix has a feature. What constitutes a reasonable tolerance depends on the application.
Functions include:
* If two matrices are identical.
* If a matrix contains NaN or other non-finite numbers.
* If a matrix is symmetric.
* If a matrix is positive definite.
* If a matrix is orthogonal.
* If a matrix is an identity matrix.
* If a matrix is the negative of another one.
* If a matrix is triangular.
* A matrix's rank and nullity.
* And several others...
Code Example:
<syntaxhighlight lang="java">
DMatrixRMaj A = new DMatrixRMaj(2,2);
A.set(0,1,2);
A.set(1,0,-2.0000000001);

if( MatrixFeatures_DDRM.isSkewSymmetric(A,1e-8) )
    System.out.println("Is skew symmetric!");
else
    System.out.println("Should be skew symmetric!");
</syntaxhighlight>
Note that even though A is not exactly skew-symmetric, it is within tolerance.
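The tolerance check above can be sketched in plain Java. The helper below is a hypothetical stand-alone illustration of the idea, not EJML's actual implementation:

```java
public class SkewSymmetricCheck {
    // Returns true if a[i][j] ~= -a[j][i] for all i,j within an absolute
    // tolerance, mirroring the definition of a skew-symmetric matrix.
    public static boolean isSkewSymmetric(double[][] a, double tol) {
        int n = a.length;
        for (int i = 0; i < n; i++) {
            if (a[i].length != n) return false;          // must be square
            for (int j = 0; j <= i; j++) {
                // a[i][j] + a[j][i] should be ~0 for a skew-symmetric matrix
                if (Math.abs(a[i][j] + a[j][i]) > tol) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Same matrix as the example above: off by 1e-10, within 1e-8
        double[][] a = { {0, 2}, {-2.0000000001, 0} };
        System.out.println(isSkewSymmetric(a, 1e-8));   // prints true
    }
}
```

Comparing a[i][j] + a[j][i] against the tolerance is exactly why the example above passes: the residual is about 1e-10, well under 1e-8.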
== Matrix Norms ==
Norms are a measure of the size of a vector or a matrix. One typical application is in error analysis.
Vector norms have the following properties:
# |x| > 0 if x != 0, and |0| = 0
# |a*x| = |a| |x|
# |x+y| <= |x| + |y|
Matrix norms have the following properties:
# |A| > 0 if A != 0
# | a A | = |a| |A|
# |A+B| <= |A| + |B|
# |AB| <= |A| |B|
where A and B are m by n matrices. Note that the last item in the list only applies to square matrices.
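As a concrete illustration of these properties, here is a self-contained sketch (plain Java, independent of EJML) that computes the Frobenius norm, the square root of the sum of squared elements, and spot-checks the triangle inequality numerically:

```java
public class NormSketch {
    // Frobenius norm: the square root of the sum of squared elements.
    public static double frobenius(double[][] a) {
        double sum = 0;
        for (double[] row : a)
            for (double v : row)
                sum += v * v;
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[][] a   = { {3, 0}, {0, 4} };   // |A| = sqrt(9 + 16) = 5
        double[][] b   = { {1, 2}, {2, 1} };
        double[][] sum = { {4, 2}, {2, 5} };   // element-wise A + B

        System.out.println(frobenius(a));                                  // 5.0
        // Triangle inequality: |A+B| <= |A| + |B|
        System.out.println(frobenius(sum) <= frobenius(a) + frobenius(b)); // true
    }
}
```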
In EJML, norms are computed by the NormOps class. For some norms a fast variant is provided, which typically skips steps that ensure numerical stability over a wider range of inputs. In applications where the input matrices or vectors are known to be well behaved, the fast functions can be used.
Code Example:
<syntaxhighlight lang="java">
double v = NormOps_DDRM.normF(A);
</syntaxhighlight>
which computes the Frobenius norm of 'A'.
8679553d4fc980b906cad0447103f6da636532fa
Input and Output
0
23
227
66
2017-05-18T17:07:47Z
Peter
1
wikitext
text/x-wiki
EJML provides several different methods for loading, saving, and displaying a matrix. A matrix can be saved to and loaded from a file, displayed visually in a window, printed to the console, or created from raw arrays or strings.
__TOC__
= Text Output =
A matrix can be printed to standard out using its built-in ''print()'' method; this works for both DMatrixRMaj and SimpleMatrix. To customize the output, the user can provide a format string compatible with printf().
Code:
<syntaxhighlight lang="java">
public static void main( String []args ) {
    DMatrixRMaj A = new DMatrixRMaj(2,3,true,1.1,2.34,3.35436,4345,59505,0.00001234);

    A.print();
    System.out.println();
    A.print("%e");
    System.out.println();
    A.print("%10.2f");
}
</syntaxhighlight>
Output:
<pre>
Type = dense real , numRows = 2 , numCols = 3
1.100 2.340 3.354
4345.000 59505.000 0.000
Type = dense real , numRows = 2 , numCols = 3
1.100000e+00 2.340000e+00 3.354360e+00
4.345000e+03 5.950500e+04 1.234000e-05
Type = dense real , numRows = 2 , numCols = 3
1.10 2.34 3.35
4345.00 59505.00 0.00
</pre>
= CSV Input/Output =
A Comma Separated Value (CSV) reader and writer is provided by EJML. The advantage of this file format is that it is human readable; the disadvantage is that it is large and slow. Two CSV formats are supported: one in which the first line specifies the matrix dimensions, and another in which the user specifies them programmatically.
In the example below, the matrix size and type are specified in the first line: rows, columns, and real/complex. The remainder of the file contains the value of each element in the matrix in row-major order. A file containing
<pre>
2 3 real
2.4 6.7 9
-2 3 5
</pre>
would describe a real matrix with 2 rows and 3 columns.
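To make the format concrete, here is a minimal stand-alone parser sketch for this header-plus-values layout. It is a hypothetical helper for illustration, not EJML's actual reader, and only the "real" case is handled:

```java
public class CsvMatrixSketch {
    // Parses the "rows cols real" header followed by row-major values
    // into a 2D array. Uses Double.parseDouble for locale-independence.
    public static double[][] parse(String text) {
        String[] tok = text.trim().split("\\s+");
        int rows = Integer.parseInt(tok[0]);
        int cols = Integer.parseInt(tok[1]);
        if (!tok[2].equals("real"))
            throw new IllegalArgumentException("only real matrices handled");
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                m[i][j] = Double.parseDouble(tok[3 + i * cols + j]);
        return m;
    }

    public static void main(String[] args) {
        // The example file from above
        double[][] m = parse("2 3 real\n2.4 6.7 9\n-2 3 5");
        System.out.println(m[0][1] + " " + m[1][0]); // prints 6.7 -2.0
    }
}
```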
DMatrixRMaj Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
    DMatrixRMaj A = new DMatrixRMaj(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        MatrixIO.saveCSV(A, "matrix_file.csv");
        DMatrixRMaj B = MatrixIO.loadCSV("matrix_file.csv");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
SimpleMatrix Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
    SimpleMatrix A = new SimpleMatrix(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        A.saveToFileCSV("matrix_file.csv");
        SimpleMatrix B = SimpleMatrix.loadCSV("matrix_file.csv");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
= Serialized Binary Input/Output =
DMatrixRMaj is a serializable object and is fully compatible with any Java serialization routine. MatrixIO provides save() and load() functions for saving to and reading from a file. The matrix is saved as a Java binary serialized object. SimpleMatrix provides its own functions (which are wrappers around MatrixIO) for saving and loading from files.
MatrixIO Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
    DMatrixRMaj A = new DMatrixRMaj(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        MatrixIO.saveBin(A,"matrix_file.data");
        DMatrixRMaj B = MatrixIO.loadBin("matrix_file.data");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
'''Note:''' in v0.18 saveBin/loadBin were named saveXML/loadXML, which was a mistake since the format is not XML.
SimpleMatrix Example:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
    SimpleMatrix A = new SimpleMatrix(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        A.saveToFileBinary("matrix_file.data");
        SimpleMatrix B = SimpleMatrix.loadBinary("matrix_file.data");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
= Visual Display =
Understanding the state of a matrix from text output can be difficult, especially for large matrices. To help in these situations, a visual way of viewing a matrix is provided by DMatrixVisualization. Calling MatrixIO.show() creates a window that displays the matrix. Positive elements appear as shades of red, negative ones as shades of blue, and zeros as black; how red or blue an element is depends on its magnitude.
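The value-to-color mapping just described can be sketched as follows. This is an illustrative approximation; the exact shading EJML uses may differ:

```java
public class MatrixColorSketch {
    // Maps a matrix element to an RGB triple: positive -> shade of red,
    // negative -> shade of blue, zero -> black. Intensity scales with
    // |value| relative to the largest magnitude in the matrix.
    public static int[] toRgb(double value, double maxAbs) {
        int shade = (int) Math.round(255.0 * Math.min(1.0, Math.abs(value) / maxAbs));
        if (value > 0) return new int[]{shade, 0, 0};   // red channel
        if (value < 0) return new int[]{0, 0, shade};   // blue channel
        return new int[]{0, 0, 0};                      // black for zero
    }

    public static void main(String[] args) {
        // Half of the maximum magnitude -> mid-intensity red
        System.out.println(java.util.Arrays.toString(toRgb(2.0, 4.0))); // [128, 0, 0]
    }
}
```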
Example Code:
<syntaxhighlight lang="java">
public static void main( String args[] ) {
    DMatrixRMaj A = new DMatrixRMaj(4,4,true,
            0,2,3,4,-2,0,2,3,-3,-2,0,2,-4,-3,-2,0);
    MatrixIO.show(A,"Small Matrix");

    DMatrixRMaj B = new DMatrixRMaj(25,50);
    for( int i = 0; i < 25; i++ )
        B.set(i,i,i+1);
    MatrixIO.show(B,"Larger Diagonal Matrix");
}
</syntaxhighlight>
Output:
{|
| http://ejml.org/wiki/MY_IMAGES/small_matrix.gif || http://ejml.org/wiki/MY_IMAGES/larger_matrix.gif
|}
e0fbc46d0d072be6152c0ff45f1b5b0a562a8f13
Unit Testing
0
24
229
70
2017-05-18T17:28:55Z
Peter
1
wikitext
text/x-wiki
[http://en.wikipedia.org/wiki/Unit_testing Unit testing] is an essential part of modern software development that helps ensure correctness. EJML itself makes extensive use of unit tests as well as system-level tests. EJML provides several functions specifically designed for creating unit tests.
EjmlUnitTests and MatrixFeatures are two classes which contain useful functions for unit testing. EjmlUnitTests provides a similar interface to how JUnitTest operates. MatrixFeatures is primarily intended for extracting high level information about a matrix, but also contains several functions for testing if two matrices are equal or have specific characteristics.
The following is a brief introduction to unit testing with EJML. See the JavaDoc for a more detailed list of functions available in EjmlUnitTests and MatrixFeatures.
= Example with EjmlUnitTests =
EjmlUnitTests provides various functions for testing equality and matrix shape. Below is an example taken from an internal EJML unit test that compares the output from two different matrix decompositions with different matrix types:
<syntaxhighlight lang="java">
DMatrixRMaj Q = decomp.getQ(null);
DMatrixRBlock Qb = decompB.getQ(null,false);
EjmlUnitTests.assertEquals(Q,Qb,UtilEjml.TEST_F64);
</syntaxhighlight>
In this example it checks to see if each element of the two matrices are within 1e-8 of each other. The reference EjmlUnitTests to can be avoided by invoking a "static import". If an error is found and the test fails the exact element it failed at is printed.
To maintain compatibility with different unit test libraries a generic runtime exception is thrown if a test fails.
= Example using MatrixFeatures =
MatrixFeatures is not designed with unit testing in mind, but provides many useful functions for unit tests. For example, to test for equality between two matrices:
<syntaxhighlight lang="java">
assertTrue(MatrixFeatures.isEquals(Q,Qb,UtilEjml.TEST_F64));
</syntaxhighlight>
Here the JUnitTest function assertTrue() has been used. MatrixFeatures.isEquals() returns true of the two matrices are within tolerance of each other. If the test fails it doesn't print any additional information, such as which element it failed at.
One advantage of MatrixFeatures is it provides support for many more specialized tests. For example if you want to know if a matrix is orthogonal call MatrixFeatures.isOrthogonal() or to test for symmetry call MatrixFeatures.isSymmetric().
b4658c4a0db60030e39c8a9b9bc049a0ad44c7f0
230
229
2017-05-18T17:30:02Z
Peter
1
wikitext
text/x-wiki
[http://en.wikipedia.org/wiki/Unit_testing Unit testing] is an essential part of modern software development that helps ensure correctness. EJML itself makes extensive use of unit tests as well as system-level tests. EJML also provides several functions that are specifically designed for creating unit tests.
EjmlUnitTests and MatrixFeatures are two classes which contain useful functions for unit testing. EjmlUnitTests provides an interface similar to JUnit's assertions. MatrixFeatures is primarily intended for extracting high-level information about a matrix, but it also contains several functions for testing whether two matrices are equal or have specific characteristics.
The following is a brief introduction to unit testing with EJML. See the JavaDoc for a more detailed list of functions available in EjmlUnitTests and MatrixFeatures.
= Example with EjmlUnitTests =
EjmlUnitTests provides various functions for testing equality and matrix shape. Below is an example taken from an internal EJML unit test that compares the output from two different matrix decompositions with different matrix types:
<syntaxhighlight lang="java">
DMatrixRMaj Q = decomp.getQ(null);
DMatrixRBlock Qb = decompB.getQ(null,false);
EjmlUnitTests.assertEquals(Q,Qb,UtilEjml.TEST_F64);
</syntaxhighlight>
In this example it checks whether each element of the two matrices is within 1e-8 of the corresponding element in the other. The explicit reference to EjmlUnitTests can be avoided with a "static import". If an error is found and the test fails, the exact element at which it failed is printed.
To maintain compatibility with different unit test libraries a generic runtime exception is thrown if a test fails.
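The element-wise tolerance check that EjmlUnitTests performs can be sketched in plain Java. This is purely an illustration of the idea, not EJML's actual implementation:

```java
public class ToleranceCheck {
    /** Returns true if every pair of corresponding elements differs by at most tol. */
    public static boolean elementsWithinTol(double[] a, double[] b, double tol) {
        if (a.length != b.length)
            return false; // shape mismatch fails the test immediately
        for (int i = 0; i < a.length; i++) {
            if (Math.abs(a[i] - b[i]) > tol)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[] q  = {1.0, 0.0, 0.0, 1.0};
        double[] qb = {1.0 + 1e-9, 0.0, 0.0, 1.0};
        System.out.println(elementsWithinTol(q, qb, 1e-8));  // prints true
        System.out.println(elementsWithinTol(q, qb, 1e-10)); // prints false
    }
}
```

EjmlUnitTests additionally reports which element failed; this sketch only returns a boolean, like the MatrixFeatures variants discussed below.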
= Example using MatrixFeatures =
MatrixFeatures is not designed with unit testing in mind, but provides many useful functions for unit tests. For example, to test for equality between two matrices:
<syntaxhighlight lang="java">
assertTrue(MatrixFeatures_DDRM.isEquals(Q,Qb,UtilEjml.TEST_F64));
</syntaxhighlight>
Here JUnit's assertTrue() has been used. MatrixFeatures_DDRM.isEquals() returns true if the two matrices are within tolerance of each other. If the test fails it doesn't print any additional information, such as the element at which it failed.
One advantage of MatrixFeatures_DDRM is that it supports many more specialized tests. For example, to check whether a matrix is orthogonal call MatrixFeatures_DDRM.isOrthogonal(), or to test for symmetry call MatrixFeatures_DDRM.isSymmetric().
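An orthogonality test of this kind boils down to checking that Q'Q is approximately the identity matrix. A minimal sketch of that idea with plain 2D arrays (illustrative only, not EJML's implementation):

```java
public class OrthogonalCheck {
    /** Returns true if Q' * Q is within tol of the identity matrix. */
    public static boolean isOrthogonal(double[][] q, double tol) {
        int n = q.length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                // dot product of column i with column j
                double dot = 0;
                for (int k = 0; k < n; k++)
                    dot += q[k][i] * q[k][j];
                // diagonal entries of Q'Q should be 1, off-diagonal 0
                double expected = (i == j) ? 1.0 : 0.0;
                if (Math.abs(dot - expected) > tol)
                    return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // a 2D rotation matrix is orthogonal
        double c = Math.cos(0.3), s = Math.sin(0.3);
        double[][] rot = {{c, -s}, {s, c}};
        System.out.println(isOrthogonal(rot, 1e-12)); // prints true
    }
}
```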
5a0f69e9fc84c4c978861a4194c4de87b801aba5
Example Kalman Filter
0
10
234
233
2017-05-18T19:37:20Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use, but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Operations || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* <disqus>Discuss this example</disqus>
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert a matrix in a Kalman filter. There are, however, applications, such as target tracking, where inverting the innovation covariance is helpful as a preprocessing step.
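The usual alternative to an explicit inverse is to solve the linear system S·x = b for each column of interest instead of forming S⁻¹. The idea can be sketched for a 2x2 system using Cramer's rule (purely illustrative; a real solver, such as the Cholesky-based SPD solver used in the Operations example below, would decompose S):

```java
public class SolveInsteadOfInvert {
    /** Solves the 2x2 system S * x = b using Cramer's rule. */
    public static double[] solve2x2(double[][] s, double[] b) {
        double det = s[0][0] * s[1][1] - s[0][1] * s[1][0];
        if (det == 0)
            throw new IllegalArgumentException("Singular matrix");
        return new double[]{
                (b[0] * s[1][1] - b[1] * s[0][1]) / det,
                (s[0][0] * b[1] - s[1][0] * b[0]) / det};
    }

    public static void main(String[] args) {
        double[][] s = {{4, 1}, {1, 3}}; // symmetric positive definite
        double[] b = {1, 2};
        double[] x = solve2x2(s, b);
        // verify S * x == b without ever forming S^-1
        System.out.println(Math.abs(s[0][0] * x[0] + s[0][1] * x[1] - b[0]) < 1e-12); // prints true
    }
}
```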
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DMatrixRMaj. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F,Q,H;
// system state estimate
private SimpleMatrix x,P;
@Override
public void configure(DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override
public void setState(DMatrixRMaj x, DMatrixRMaj P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override
public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override
public void update(DMatrixRMaj _z, DMatrixRMaj _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-KH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override
public DMatrixRMaj getState() {
return x.getMatrix();
}
@Override
public DMatrixRMaj getCovariance() {
return P.getMatrix();
}
}
</syntaxhighlight>
== Operations Example ==
<syntaxhighlight lang="java">
/**
 * A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
 * memory creation/destruction has been reduced compared to KalmanFilterSimple. A specialized solver is
 * used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DMatrixRMaj F,Q,H;
// system state estimate
private DMatrixRMaj x,P;
// these are predeclared for efficiency reasons
private DMatrixRMaj a,b;
private DMatrixRMaj y,S,S_inv,c,d;
private DMatrixRMaj K;
private LinearSolver<DMatrixRMaj> solver;
@Override
public void configure(DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DMatrixRMaj(dimenX,1);
b = new DMatrixRMaj(dimenX,dimenX);
y = new DMatrixRMaj(dimenZ,1);
S = new DMatrixRMaj(dimenZ,dimenZ);
S_inv = new DMatrixRMaj(dimenZ,dimenZ);
c = new DMatrixRMaj(dimenZ,dimenX);
d = new DMatrixRMaj(dimenX,dimenZ);
K = new DMatrixRMaj(dimenX,dimenZ);
x = new DMatrixRMaj(dimenX,1);
P = new DMatrixRMaj(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory_DDRM.symmPosDef(dimenX);
}
@Override
public void setState(DMatrixRMaj x, DMatrixRMaj P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override
public void update(DMatrixRMaj z, DMatrixRMaj R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override
public DMatrixRMaj getState() {
return x;
}
@Override
public DMatrixRMaj getCovariance() {
return P;
}
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter{
// system state estimate
private DMatrixRMaj x,P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX,predictP;
Sequence updateY,updateK,updateX,updateP;
@Override
public void configure(DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H) {
int dimenX = F.numCols;
x = new DMatrixRMaj(dimenX,1);
P = new DMatrixRMaj(dimenX,dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x,"x",P,"P",Q,"Q",F,"F",H,"H");
// Dummy matrix placeholder to avoid compiler errors. Will be replaced later on
eq.alias(new DMatrixRMaj(1,1),"z");
eq.alias(new DMatrixRMaj(1,1),"R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override
public void setState(DMatrixRMaj x, DMatrixRMaj P) {
this.x.set(x);
this.P.set(P);
}
@Override
public void predict() {
predictX.perform();
predictP.perform();
}
@Override
public void update(DMatrixRMaj z, DMatrixRMaj R) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z,"z"); eq.alias(R,"R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override
public DMatrixRMaj getState() {
return x;
}
@Override
public DMatrixRMaj getCovariance() {
return P;
}
}
</syntaxhighlight>
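For a one-dimensional state the predict/update equations used in all three implementations reduce to a few scalar operations. A minimal scalar sketch (with made-up values; not part of EJML) can make the algebra easier to follow:

```java
public class ScalarKalman {
    double x, p;          // state estimate and its variance
    final double f, q, h; // state transition, process noise, measurement model

    ScalarKalman(double x, double p, double f, double q, double h) {
        this.x = x; this.p = p; this.f = f; this.q = q; this.h = h;
    }

    void predict() {
        x = f * x;         // x = F x
        p = f * p * f + q; // P = F P F' + Q
    }

    void update(double z, double r) {
        double y = z - h * x;     // innovation: y = z - H x
        double s = h * p * h + r; // innovation covariance: S = H P H' + R
        double k = p * h / s;     // Kalman gain: K = P H' S^-1
        x = x + k * y;            // x = x + K y
        p = p - k * h * p;        // P = (I - K H) P
    }

    public static void main(String[] args) {
        ScalarKalman kf = new ScalarKalman(0, 1, 1, 0.01, 1);
        for (double z : new double[]{1.1, 0.9, 1.05}) {
            kf.predict();
            kf.update(z, 0.5);
        }
        System.out.println(kf.x); // estimate moves toward the measurements near 1
    }
}
```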
d30b395d7d7b7da77134830bd5ed8e30b24544f5
Example Levenberg-Marquardt
0
12
235
125
2017-05-18T19:38:44Z
Peter
1
wikitext
text/x-wiki
Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
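The grow-only reshape behavior described above can be sketched without EJML (illustrative only; EJML's DMatrixRMaj.reshape() works along these lines):

```java
public class GrowOnlyMatrix {
    double[] data = new double[1];
    int rows = 1, cols = 1;

    /** Changes the shape; allocates new memory only if more is needed. */
    void reshape(int numRows, int numCols) {
        int needed = numRows * numCols;
        if (needed > data.length)
            data = new double[needed]; // grow; otherwise reuse the existing array
        rows = numRows;
        cols = numCols;
    }

    public static void main(String[] args) {
        GrowOnlyMatrix m = new GrowOnlyMatrix();
        m.reshape(4, 4);
        double[] backing = m.data;
        m.reshape(2, 3); // smaller shape: the backing array is reused
        System.out.println(m.data == backing); // prints true
    }
}
```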
The algorithm is provided a function, set of inputs, set of outputs, and an initial estimate of the parameters (this often works with all zeros). It finds the parameters that minimize the difference between the computed output and the observed output. A numerical Jacobian is used to estimate the function's gradient.
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt.java code]
* <disqus>Discuss this example</disqus>
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
* d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
* H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// how much the numerical jacobian calculation perturbs the parameters by.
// Better implementations compute this delta adaptively. See Numerical Recipes.
private final static double DELTA = 1e-8;
private double initialLambda;
// the function that is optimized
private Function func;
// the optimized parameters and associated costs
private DMatrixRMaj param;
private double initialCost;
private double finalCost;
// used by matrix operations
private DMatrixRMaj d;
private DMatrixRMaj H;
private DMatrixRMaj negDelta;
private DMatrixRMaj tempParam;
private DMatrixRMaj A;
// variables used by the numerical jacobian algorithm
private DMatrixRMaj temp0;
private DMatrixRMaj temp1;
// used when computing d and H variables
private DMatrixRMaj tempDH;
// Where the numerical Jacobian is stored.
private DMatrixRMaj jacobian;
/**
* Creates a new instance that uses the provided cost function.
*
* @param funcCost Cost function that is being optimized.
*/
public LevenbergMarquardt( Function funcCost )
{
this.initialLambda = 1;
// declare data to some initial small size. It will grow later on as needed.
int maxElements = 1;
int numParam = 1;
this.temp0 = new DMatrixRMaj(maxElements,1);
this.temp1 = new DMatrixRMaj(maxElements,1);
this.tempDH = new DMatrixRMaj(maxElements,1);
this.jacobian = new DMatrixRMaj(numParam,maxElements);
this.func = funcCost;
this.param = new DMatrixRMaj(numParam,1);
this.d = new DMatrixRMaj(numParam,1);
this.H = new DMatrixRMaj(numParam,numParam);
this.negDelta = new DMatrixRMaj(numParam,1);
this.tempParam = new DMatrixRMaj(numParam,1);
this.A = new DMatrixRMaj(numParam,numParam);
}
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
public DMatrixRMaj getParameters() {
return param;
}
/**
* Finds the best fit parameters.
*
* @param initParam The initial set of parameters for the function.
* @param X The inputs to the function.
* @param Y The "observed" output of the function
* @return true if it succeeded and false if it did not.
*/
public boolean optimize( DMatrixRMaj initParam ,
DMatrixRMaj X ,
DMatrixRMaj Y )
{
configure(initParam,X,Y);
// save the cost of the initial parameters so that it knows if it improves or not
initialCost = cost(param,X,Y);
// iterate until the difference between the costs is insignificant
// or it iterates too many times
if( !adjustParam(X, Y, initialCost) ) {
finalCost = Double.NaN;
return false;
}
return true;
}
/**
* Iterate until the difference between the costs is insignificant
* or it iterates too many times
*/
private boolean adjustParam(DMatrixRMaj X, DMatrixRMaj Y,
double prevCost) {
// lambda adjusts how big of a step it takes
double lambda = initialLambda;
// the difference between the current and previous cost
double difference = 1000;
for( int iter = 0; iter < 20 && difference > 1e-6; iter++ ) {
// compute some variables based on the gradient
computeDandH(param,X,Y);
// try various step sizes and see if any of them improve the
// results over what has already been done
boolean foundBetter = false;
for( int i = 0; i < 5; i++ ) {
computeA(A,H,lambda);
if( !solve(A,d,negDelta) ) {
return false;
}
// compute the candidate parameters
subtract(param, negDelta, tempParam);
double cost = cost(tempParam,X,Y);
if( cost < prevCost ) {
// the candidate parameters produced better results so use it
foundBetter = true;
param.set(tempParam);
difference = prevCost - cost;
prevCost = cost;
lambda /= 10.0;
} else {
lambda *= 10.0;
}
}
// it reached a point where it can't improve so exit
if( !foundBetter )
break;
}
finalCost = prevCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure(DMatrixRMaj initParam , DMatrixRMaj X , DMatrixRMaj Y )
{
if( Y.getNumRows() != X.getNumRows() ) {
throw new IllegalArgumentException("Different vector lengths");
} else if( Y.getNumCols() != 1 || X.getNumCols() != 1 ) {
throw new IllegalArgumentException("Inputs must be a column vector");
}
int numParam = initParam.getNumElements();
int numPoints = Y.getNumRows();
if( param.getNumElements() != initParam.getNumElements() ) {
// reshaping a matrix means that new memory is only declared when needed
this.param.reshape(numParam,1, false);
this.d.reshape(numParam,1, false);
this.H.reshape(numParam,numParam, false);
this.negDelta.reshape(numParam,1, false);
this.tempParam.reshape(numParam,1, false);
this.A.reshape(numParam,numParam, false);
}
param.set(initParam);
// reshaping a matrix means that new memory is only declared when needed
temp0.reshape(numPoints,1, false);
temp1.reshape(numPoints,1, false);
tempDH.reshape(numPoints,1, false);
jacobian.reshape(numParam,numPoints, false);
}
/**
* Computes the d and H parameters, where d is the average error gradient and
* H is an approximation of the Hessian.
*/
private void computeDandH(DMatrixRMaj param , DMatrixRMaj x , DMatrixRMaj y )
{
func.compute(param,x, tempDH);
subtractEquals(tempDH, y);
computeNumericalJacobian(param,x,jacobian);
int numParam = param.getNumElements();
int length = x.getNumElements();
// d = average{ (f(x_i;p) - y_i) * jacobian(:,i) }
for( int i = 0; i < numParam; i++ ) {
double total = 0;
for( int j = 0; j < length; j++ ) {
total += tempDH.get(j,0)*jacobian.get(i,j);
}
d.set(i,0,total/length);
}
// compute the approximation of the hessian
multTransB(jacobian,jacobian,H);
scale(1.0/length,H);
}
/**
* A = H + lambda*I <br>
* <br>
* where I is an identity matrix.
*/
private void computeA(DMatrixRMaj A , DMatrixRMaj H , double lambda )
{
final int numParam = param.getNumElements();
A.set(H);
for( int i = 0; i < numParam; i++ ) {
A.set(i,i, A.get(i,i) + lambda);
}
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x;p) - y)^2
*/
private double cost(DMatrixRMaj param , DMatrixRMaj X , DMatrixRMaj Y)
{
func.compute(param,X, temp0);
double error = diffNormF(temp0,Y);
return error*error / (double)X.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param The set of parameters that the Jacobian is to be computed at.
* @param pt The point around which the Jacobian is to be computed.
* @param deriv Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DMatrixRMaj param ,
DMatrixRMaj pt ,
DMatrixRMaj deriv )
{
double invDelta = 1.0/DELTA;
func.compute(param,pt, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it affects the results.
for( int i = 0; i < param.numRows; i++ ) {
param.data[i] += DELTA;
func.compute(param,pt, temp1);
// compute the difference between the two parameters and divide by the delta
add(invDelta,temp1,-invDelta,temp0,temp1);
// copy the results into the jacobian matrix
System.arraycopy(temp1.data,0,deriv.data,i*pt.numRows,pt.numRows);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized.
*/
public interface Function {
/**
* Computes the output for each value in matrix x given the set of parameters.
*
* @param param The parameter for the function.
* @param x the input points.
* @param y the resulting output.
*/
public void compute(DMatrixRMaj param , DMatrixRMaj x , DMatrixRMaj y );
}
}
</syntaxhighlight>
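The forward-difference Jacobian computed by computeNumericalJacobian() above can be sketched without any EJML dependency. The linear model, class name, and method names below are illustrative assumptions, not part of EJML:

```java
public class NumericalJacobianSketch {
    static final double DELTA = 1e-7;

    // illustrative model: f(x; p) = p0 + p1*x, evaluated at each sample point
    static double[] compute(double[] param, double[] pts) {
        double[] out = new double[pts.length];
        for (int i = 0; i < pts.length; i++)
            out[i] = param[0] + param[1] * pts[i];
        return out;
    }

    // jacobian[i][j] = d f(x_j) / d param_i, via (f(p + e_i*DELTA) - f(p)) / DELTA
    static double[][] jacobian(double[] param, double[] pts) {
        double[] f0 = compute(param, pts);
        double[][] J = new double[param.length][pts.length];
        for (int i = 0; i < param.length; i++) {
            param[i] += DELTA;                  // perturb one parameter
            double[] f1 = compute(param, pts);
            for (int j = 0; j < pts.length; j++)
                J[i][j] = (f1[j] - f0[j]) / DELTA;
            param[i] -= DELTA;                  // restore, exactly as the EJML code does
        }
        return J;
    }

    public static void main(String[] args) {
        double[][] J = jacobian(new double[]{2, 3}, new double[]{0, 1, 2});
        // for this linear model df/dp0 = 1 and df/dp1 = x, so approximately 1.0 and 2.0
        System.out.println(J[0][0] + " " + J[1][2]);
    }
}
```

The same structure maps one-to-one onto the EJML version: `temp0` holds f(p), `temp1` holds the scaled difference, and each parameter's row of the Jacobian is filled before the parameter is restored.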
642dc9e22a755f0db1155d5c519d5f49f549ebc6
Example Principal Component Analysis
0
13
236
126
2017-05-18T19:45:21Z
Peter
1
wikitext
text/x-wiki
Principal Component Analysis (PCA) is a popular and simple-to-implement classification technique, often used in face recognition. The following is an example of how to implement it in EJML using the procedural interface. It is assumed that the reader is already familiar with PCA.
External Resources
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/PrincipalComponentAnalysis.java PrincipalComponentAnalysis.java source code]
* [http://en.wikipedia.org/wiki/Principal_component_analysis General PCA information on Wikipedia]
* <disqus>Discuss this example</disqus>
= Sample Code =
<syntaxhighlight lang="java">
/**
* <p>
* The following is a simple example of how to perform basic principal component analysis in EJML.
* </p>
*
* <p>
* Principal Component Analysis (PCA) is typically used to develop a linear model for a set of data
* (e.g. face images) which can then be used to test for membership. PCA works by converting the
* set of data to a new basis that is a subspace of the original set. The subspace is selected
* to maximize information.
* </p>
* <p>
* PCA is typically derived as an eigenvalue problem. However in this implementation {@link org.ejml.interfaces.decomposition.SingularValueDecomposition SVD}
* is used instead because it will produce a more numerically stable solution. Computation using EVD requires explicitly
* computing the variance of each sample set. The variance is computed by squaring the residual, which can
* cause loss of precision.
* </p>
*
* <p>
* Usage:<br>
* 1) call setup()<br>
* 2) For each sample (e.g. an image ) call addSample()<br>
* 3) After all the samples have been added call computeBasis()<br>
* 4) Call sampleToEigenSpace() , eigenToSampleSpace() , errorMembership() , response()
* </p>
*
* @author Peter Abeles
*/
public class PrincipalComponentAnalysis {
// principal component subspace is stored in the rows
private DMatrixRMaj V_t;
// how many principal components are used
private int numComponents;
// where the data is stored
private DMatrixRMaj A = new DMatrixRMaj(1,1);
private int sampleIndex;
// mean values of each element across all the samples
double mean[];
public PrincipalComponentAnalysis() {
}
/**
* Must be called before any other functions. Declares and sets up internal data structures.
*
* @param numSamples Number of samples that will be processed.
* @param sampleSize Number of elements in each sample.
*/
public void setup( int numSamples , int sampleSize ) {
mean = new double[ sampleSize ];
A.reshape(numSamples,sampleSize,false);
sampleIndex = 0;
numComponents = -1;
}
/**
* Adds a new sample of the raw data to internal data structure for later processing. All the samples
* must be added before computeBasis is called.
*
* @param sampleData Sample from original raw data.
*/
public void addSample( double[] sampleData ) {
if( A.getNumCols() != sampleData.length )
throw new IllegalArgumentException("Unexpected sample size");
if( sampleIndex >= A.getNumRows() )
throw new IllegalArgumentException("Too many samples");
for( int i = 0; i < sampleData.length; i++ ) {
A.set(sampleIndex,i,sampleData[i]);
}
sampleIndex++;
}
/**
* Computes a basis (the principal components) from the most dominant eigenvectors.
*
* @param numComponents Number of vectors it will use to describe the data. Typically much
* smaller than the number of elements in the input vector.
*/
public void computeBasis( int numComponents ) {
if( numComponents > A.getNumCols() )
throw new IllegalArgumentException("More components requested than the data's length.");
if( sampleIndex != A.getNumRows() )
throw new IllegalArgumentException("Not all the data has been added");
if( numComponents > sampleIndex )
throw new IllegalArgumentException("More data needed to compute the desired number of components");
this.numComponents = numComponents;
// compute the mean of all the samples
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
mean[j] += A.get(i,j);
}
}
for( int j = 0; j < mean.length; j++ ) {
mean[j] /= A.getNumRows();
}
// subtract the mean from the original data
for( int i = 0; i < A.getNumRows(); i++ ) {
for( int j = 0; j < mean.length; j++ ) {
A.set(i,j,A.get(i,j)-mean[j]);
}
}
// Compute SVD and save time by not computing U
SingularValueDecomposition<DMatrixRMaj> svd =
DecompositionFactory_DDRM.svd(A.numRows, A.numCols, false, true, false);
if( !svd.decompose(A) )
throw new RuntimeException("SVD failed");
V_t = svd.getV(null,true);
DMatrixRMaj W = svd.getW(null);
// Singular values are in an arbitrary order initially
SingularOps_DDRM.descendingOrder(null,false,W,V_t,true);
// strip off unneeded components and find the basis
V_t.reshape(numComponents,mean.length,true);
}
/**
* Returns a vector from the PCA's basis.
*
* @param which Which component's vector is to be returned.
* @return Vector from the PCA basis.
*/
public double[] getBasisVector( int which ) {
if( which < 0 || which >= numComponents )
throw new IllegalArgumentException("Invalid component");
DMatrixRMaj v = new DMatrixRMaj(1,A.numCols);
CommonOps_DDRM.extract(V_t,which,which+1,0,A.numCols,v,0,0);
return v.data;
}
/**
* Converts a vector from sample space into eigen space.
*
* @param sampleData Sample space data.
* @return Eigen space projection.
*/
public double[] sampleToEigenSpace( double[] sampleData ) {
if( sampleData.length != A.getNumCols() )
throw new IllegalArgumentException("Unexpected sample length");
DMatrixRMaj mean = DMatrixRMaj.wrap(A.getNumCols(),1,this.mean);
DMatrixRMaj s = new DMatrixRMaj(A.getNumCols(),1,true,sampleData);
DMatrixRMaj r = new DMatrixRMaj(numComponents,1);
CommonOps_DDRM.subtract(s, mean, s);
CommonOps_DDRM.mult(V_t,s,r);
return r.data;
}
/**
* Converts a vector from eigen space into sample space.
*
* @param eigenData Eigen space data.
* @return Sample space projection.
*/
public double[] eigenToSampleSpace( double[] eigenData ) {
if( eigenData.length != numComponents )
throw new IllegalArgumentException("Unexpected sample length");
DMatrixRMaj s = new DMatrixRMaj(A.getNumCols(),1);
DMatrixRMaj r = DMatrixRMaj.wrap(numComponents,1,eigenData);
CommonOps_DDRM.multTransA(V_t,r,s);
DMatrixRMaj mean = DMatrixRMaj.wrap(A.getNumCols(),1,this.mean);
CommonOps_DDRM.add(s,mean,s);
return s.data;
}
/**
* <p>
* The membership error for a sample. If the error is less than a threshold then
* it can be considered a member. The threshold's value depends on the data set.
* </p>
* <p>
* The error is computed by projecting the sample into eigenspace, projecting it
* back into sample space, and then computing the Euclidean distance between the
* original and reconstructed sample.
* </p>
*
* @param sampleA The sample whose membership status is being considered.
* @return Its membership error.
*/
public double errorMembership( double[] sampleA ) {
double[] eig = sampleToEigenSpace(sampleA);
double[] reproj = eigenToSampleSpace(eig);
double total = 0;
for( int i = 0; i < reproj.length; i++ ) {
double d = sampleA[i] - reproj[i];
total += d*d;
}
return Math.sqrt(total);
}
/**
* Computes the dot product of each basis vector against the sample. Can be used as a measure
* for membership in the training sample set. High values correspond to a better fit.
*
* @param sample Sample of original data.
* @return Higher value indicates it is more likely to be a member of input dataset.
*/
public double response( double[] sample ) {
if( sample.length != A.numCols )
throw new IllegalArgumentException("Expected input vector to be in sample space");
DMatrixRMaj dots = new DMatrixRMaj(numComponents,1);
DMatrixRMaj s = DMatrixRMaj.wrap(A.numCols,1,sample);
CommonOps_DDRM.mult(V_t,s,dots);
return NormOps_DDRM.normF(dots);
}
}
</syntaxhighlight>
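The core of computeBasis(), mean removal followed by finding the dominant direction, can be illustrated without EJML for the 2D case, where the covariance eigenvector has a closed form. Everything below (class name, method names, sample data) is an illustrative sketch, not EJML API, and it assumes the off-diagonal covariance is nonzero:

```java
public class TinyPca2D {
    // Returns the unit-length first principal direction of 2D samples,
    // via the closed-form dominant eigenvector of the 2x2 covariance matrix.
    static double[] firstComponent(double[][] pts) {
        int n = pts.length;
        double mx = 0, my = 0;
        for (double[] p : pts) { mx += p[0]; my += p[1]; }
        mx /= n; my /= n;
        // sample covariance entries (divide by n-1)
        double cxx = 0, cxy = 0, cyy = 0;
        for (double[] p : pts) {
            double dx = p[0] - mx, dy = p[1] - my;
            cxx += dx * dx; cxy += dx * dy; cyy += dy * dy;
        }
        cxx /= n - 1; cxy /= n - 1; cyy /= n - 1;
        // largest eigenvalue of [[cxx,cxy],[cxy,cyy]] from trace and determinant
        double tr = cxx + cyy, det = cxx * cyy - cxy * cxy;
        double lambda = tr / 2 + Math.sqrt(tr * tr / 4 - det);
        // eigenvector for lambda (assumes cxy != 0), normalized to unit length
        double vx = cxy, vy = lambda - cxx;
        double norm = Math.sqrt(vx * vx + vy * vy);
        return new double[]{vx / norm, vy / norm};
    }

    public static void main(String[] args) {
        // points roughly along the line y = x: direction is approximately (0.707, 0.707)
        double[][] pts = {{0, 0}, {1, 1}, {2, 2}, {3, 3.1}};
        double[] v = firstComponent(pts);
        System.out.println(v[0] + " " + v[1]);
    }
}
```

The EJML class does the same mean subtraction, but then uses SVD instead of an explicit covariance, which avoids squaring residuals and is more numerically stable.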
70efd25c291bd2f4eccb86309eaa934e6755c682
Example Polynomial Fitting
0
14
237
127
2017-05-18T19:49:52Z
Peter
1
wikitext
text/x-wiki
In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are: 1) how to create a linear solver using LinearSolverFactory, 2) how to use an adjustable linear solver, and 3) effective matrix reshaping. This is all done using the procedural interface.
First a best fit polynomial is fit to a set of data, then outliers are removed from the observation set and the coefficients recomputed. Outliers are removed efficiently using an adjustable solver that does not resolve the whole system again.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/PolynomialFit.java PolynomialFit.java source code]
* <disqus>Discuss this example</disqus>
= PolynomialFit Example Code =
<syntaxhighlight lang="java">
/**
* <p>
* This example demonstrates how a polynomial can be fit to a set of data. This is done by
* using a least squares solver that is adjustable. By using an adjustable solver elements
* can be inexpensively removed and the coefficients recomputed. This is much less expensive
* than resolving the whole system from scratch.
* </p>
* <p>
* The following is demonstrated:<br>
* <ol>
* <li>Creating a solver using LinearSolverFactory</li>
* <li>Using an adjustable solver</li>
* <li>reshaping</li>
* </ol>
* @author Peter Abeles
*/
public class PolynomialFit {
// Vandermonde matrix
DMatrixRMaj A;
// matrix containing computed polynomial coefficients
DMatrixRMaj coef;
// observation matrix
DMatrixRMaj y;
// solver used to compute
AdjustableLinearSolver_DDRM solver;
/**
* Constructor.
*
* @param degree The polynomial's degree which is to be fit to the observations.
*/
public PolynomialFit( int degree ) {
coef = new DMatrixRMaj(degree+1,1);
A = new DMatrixRMaj(1,degree+1);
y = new DMatrixRMaj(1,1);
// create a solver that allows elements to be added or removed efficiently
solver = LinearSolverFactory_DDRM.adjustable();
}
/**
* Returns the computed coefficients
*
* @return polynomial coefficients that best fit the data.
*/
public double[] getCoef() {
return coef.data;
}
/**
* Computes the best fit set of polynomial coefficients to the provided observations.
*
* @param samplePoints where the observations were sampled.
* @param observations A set of observations.
*/
public void fit( double samplePoints[] , double[] observations ) {
// Create a copy of the observations and put it into a matrix
y.reshape(observations.length,1,false);
System.arraycopy(observations,0, y.data,0,observations.length);
// reshape the matrix to avoid unnecessarily declaring new memory
// save values is set to false since its old values don't matter
A.reshape(y.numRows, coef.numRows,false);
// set up the A matrix
for( int i = 0; i < observations.length; i++ ) {
double obs = 1;
for( int j = 0; j < coef.numRows; j++ ) {
A.set(i,j,obs);
obs *= samplePoints[i];
}
}
// process the A matrix and see if it failed
if( !solver.setA(A) )
throw new RuntimeException("Solver failed");
// solve for the coefficients
solver.solve(y,coef);
}
/**
* Removes the observation that fits the model the worst and recomputes the coefficients.
* This is done efficiently by using an adjustable solver. Often the elements with
* the largest errors are outliers and not part of the system being modeled. By removing them
* a more accurate set of coefficients can be computed.
*/
public void removeWorstFit() {
// find the observation with the most error
int worstIndex=-1;
double worstError = -1;
for( int i = 0; i < y.numRows; i++ ) {
double predictedObs = 0;
for( int j = 0; j < coef.numRows; j++ ) {
predictedObs += A.get(i,j)*coef.get(j,0);
}
double error = Math.abs(predictedObs- y.get(i,0));
if( error > worstError ) {
worstError = error;
worstIndex = i;
}
}
// nothing left to remove, so just return
if( worstIndex == -1 )
return;
// remove that observation
removeObservation(worstIndex);
// update A
solver.removeRowFromA(worstIndex);
// solve for the parameters again
solver.solve(y,coef);
}
/**
* Removes an element from the observation matrix.
*
* @param index which element is to be removed
*/
private void removeObservation( int index ) {
final int N = y.numRows-1;
final double d[] = y.data;
// shift
for( int i = index; i < N; i++ ) {
d[i] = d[i+1];
}
y.numRows--;
}
}
</syntaxhighlight>
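The Vandermonde system solved by fit() can be sketched in plain Java for the degree-1 case, where the normal equations A^T A c = A^T y reduce to a 2x2 solve by Cramer's rule. The class and method names below are illustrative, not EJML API:

```java
public class LineFitSketch {
    // Fits y = c0 + c1*x in a least-squares sense. The sums below are the
    // entries of A^T A and A^T y for the Vandermonde matrix built in fit().
    static double[] fitLine(double[] x, double[] y) {
        double s1 = x.length, sx = 0, sxx = 0, sy = 0, sxy = 0;
        for (int i = 0; i < x.length; i++) {
            sx += x[i]; sxx += x[i] * x[i];
            sy += y[i]; sxy += x[i] * y[i];
        }
        // Cramer's rule on [[s1,sx],[sx,sxx]] [c0,c1]^T = [sy,sxy]^T
        double det = s1 * sxx - sx * sx;
        double c0 = (sy * sxx - sx * sxy) / det;
        double c1 = (s1 * sxy - sx * sy) / det;
        return new double[]{c0, c1};
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 2, 3};
        double[] y = {1, 3, 5, 7};   // exactly y = 1 + 2x
        double[] c = fitLine(x, y);
        System.out.println(c[0] + " " + c[1]);   // prints 1.0 2.0
    }
}
```

For higher degrees the normal equations grow to (degree+1)x(degree+1), which is why the example above delegates to a factory-created solver rather than expanding them by hand.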
5e01fd05807d8a7070df804e0ebfac0d4a4dffdd
Example Polynomial Roots
0
15
238
128
2017-05-19T00:51:34Z
Peter
1
wikitext
text/x-wiki
Eigenvalue decomposition can be used to find the roots of a polynomial by constructing the so-called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While faster techniques do exist for root finding, this is one of the most stable and probably the easiest to implement.
Because the companion matrix is not symmetric, a general eigenvalue [MatrixDecomposition decomposition] is needed. The roots of the polynomial may also be [http://en.wikipedia.org/wiki/Complex_number complex].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/PolynomialRootFinder.java PolynomialRootFinder.java source code]
* <disqus>Discuss this example</disqus>
= Example Code =
<syntaxhighlight lang="java">
public class PolynomialRootFinder {
/**
* <p>
* Given a set of polynomial coefficients, compute the roots of the polynomial. Depending on
* the polynomial being considered the roots may contain complex numbers. When complex numbers are
* present they will come in pairs of complex conjugates.
* </p>
*
* <p>
* Coefficients are ordered from least to most significant, e.g: y = c[0] + x*c[1] + x*x*c[2].
* </p>
*
* @param coefficients Coefficients of the polynomial.
* @return The roots of the polynomial
*/
public static Complex_F64[] findRoots(double... coefficients) {
int N = coefficients.length-1;
// Construct the companion matrix
DMatrixRMaj c = new DMatrixRMaj(N,N);
double a = coefficients[N];
for( int i = 0; i < N; i++ ) {
c.set(i,N-1,-coefficients[i]/a);
}
for( int i = 1; i < N; i++ ) {
c.set(i,i-1,1);
}
// use generalized eigenvalue decomposition to find the roots
EigenDecomposition_F64<DMatrixRMaj> evd = DecompositionFactory_DDRM.eig(N,false);
evd.decompose(c);
Complex_F64[] roots = new Complex_F64[N];
for( int i = 0; i < N; i++ ) {
roots[i] = evd.getEigenvalue(i);
}
return roots;
}
}
</syntaxhighlight>
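For a quadratic the companion matrix built in findRoots() is 2x2 and its eigenvalues have a closed form, which makes the construction easy to verify without EJML. The sketch below assumes real roots and uses illustrative names:

```java
public class CompanionRootsSketch {
    // Roots of c0 + c1*x + c2*x^2 via the eigenvalues of the 2x2 companion
    // matrix [[0, -c0/c2], [1, -c1/c2]], the same layout findRoots() builds.
    static double[] quadraticRoots(double c0, double c1, double c2) {
        double m01 = -c0 / c2, m11 = -c1 / c2;
        // characteristic polynomial of [[0,m01],[1,m11]]:
        //   lambda^2 - m11*lambda - m01 = 0
        double tr = m11, det = -m01;
        double disc = Math.sqrt(tr * tr - 4 * det);   // assumes real roots
        return new double[]{(tr - disc) / 2, (tr + disc) / 2};
    }

    public static void main(String[] args) {
        // x^2 - 3x + 2 = (x-1)(x-2), coefficients least-significant first
        double[] r = quadraticRoots(2, -3, 1);
        System.out.println(r[0] + " " + r[1]);   // prints 1.0 2.0
    }
}
```

For degree N > 2 there is no closed form, which is exactly why findRoots() hands the N x N companion matrix to a general eigenvalue decomposition.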
26234acbf1c0b8eab6ca660228ab4dc173cbf9ee
Example Customizing Equations
0
19
239
129
2017-05-19T00:52:13Z
Peter
1
wikitext
text/x-wiki
While Equations provides many of the most common functions used in Linear Algebra, there are many it does not provide. The following example demonstrates how to add your own functions to Equations allowing you to extend its capabilities.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/EquationCustomFunction.java EquationCustomFunction.java source code]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration on how to create and use a custom function in Equation. A custom function must implement
* ManagerFunctions.Input1 or ManagerFunctions.InputN, depending on the number of inputs it takes.
*
* @author Peter Abeles
*/
public class EquationCustomFunction {
public static void main(String[] args) {
Random rand = new Random(234);
Equation eq = new Equation();
eq.getFunctions().add("multTransA",createMultTransA());
SimpleMatrix A = new SimpleMatrix(1,1); // will be resized
SimpleMatrix B = SimpleMatrix.random64(3,4,-1,1,rand);
SimpleMatrix C = SimpleMatrix.random64(3,4,-1,1,rand);
eq.alias(A,"A",B,"B",C,"C");
eq.process("A=multTransA(B,C)");
System.out.println("Found");
System.out.println(A);
System.out.println("Expected");
B.transpose().mult(C).print();
}
/**
* Create the function. Be sure to handle all possible input types and combinations correctly and provide
* meaningful error messages. The output matrix should be resized to fit the inputs.
*/
public static ManagerFunctions.InputN createMultTransA() {
return new ManagerFunctions.InputN() {
@Override
public Operation.Info create(List<Variable> inputs, ManagerTempVariables manager ) {
if( inputs.size() != 2 )
throw new RuntimeException("Two inputs required");
final Variable varA = inputs.get(0);
final Variable varB = inputs.get(1);
Operation.Info ret = new Operation.Info();
if( varA instanceof VariableMatrix && varB instanceof VariableMatrix ) {
// The output matrix or scalar variable must be created with the provided manager
final VariableMatrix output = manager.createMatrix();
ret.output = output;
ret.op = new Operation("multTransA-mm") {
@Override
public void process() {
DMatrixRMaj mA = ((VariableMatrix)varA).matrix;
DMatrixRMaj mB = ((VariableMatrix)varB).matrix;
output.matrix.reshape(mA.numCols,mB.numCols);
CommonOps_DDRM.multTransA(mA,mB,output.matrix);
}
};
} else {
throw new IllegalArgumentException("Expected both inputs to be a matrix");
}
return ret;
}
};
}
}
</syntaxhighlight>
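The operation the custom function delegates to, C = A^T B, can be written out on plain 2D arrays to make clear what CommonOps_DDRM.multTransA computes. The class below is an illustrative sketch, not EJML code:

```java
public class MultTransASketch {
    // C = A^T * B: C has A's column count rows and B's column count columns,
    // and the inner sum runs over the shared row dimension.
    static double[][] multTransA(double[][] A, double[][] B) {
        int rows = A[0].length, cols = B[0].length, inner = A.length;
        double[][] C = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                double sum = 0;
                for (int k = 0; k < inner; k++)
                    sum += A[k][i] * B[k][j];   // A^T(i,k) = A(k,i)
                C[i][j] = sum;
            }
        return C;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};   // A^T = {{1,3},{2,4}}
        double[][] B = {{5, 6}, {7, 8}};
        double[][] C = multTransA(A, B);
        System.out.println(C[0][0] + " " + C[1][1]);   // prints 26.0 44.0
    }
}
```

This output shape, numCols(A) by numCols(B), is why the custom function reshapes `output.matrix` to `(mA.numCols, mB.numCols)` before calling multTransA.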
a8850db4f97790943999dda4722f3b29c39c3294
Example Customizing SimpleMatrix
0
16
240
130
2017-05-19T00:53:08Z
Peter
1
wikitext
text/x-wiki
[[SimpleMatrix]] provides an easy to use object oriented way of doing linear algebra. There are many other problems which use matrices and could use SimpleMatrix's functionality. In those situations it is desirable to simply extend SimpleMatrix and add additional functions as needed.
Naively extending SimpleMatrix is problematic because internally SimpleMatrix creates new matrices, and its functions would return objects of the wrong type. To get around these problems SimpleBase is extended instead and its abstract functions implemented. SimpleBase provides all the core functionality of SimpleMatrix, with the exception of its static functions.
An example is provided below where a new class called StatisticsMatrix is created that adds statistical functions to SimpleMatrix. Usage examples are provided in its main() function.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/StatisticsMatrix.java StatisticsMatrix.java source code]
* <disqus>Discuss this example</disqus>
= Example =
<syntaxhighlight lang="java">
/**
* Example of how to extend "SimpleMatrix" and add your own functionality. In this case
* two basic statistic operations are added. Since SimpleBase is extended and StatisticsMatrix
* is specified as the generics type, all "SimpleMatrix" operations return a matrix of
* type StatisticsMatrix, ensuring strong typing.
*
* @author Peter Abeles
*/
public class StatisticsMatrix extends SimpleBase<StatisticsMatrix> {
public StatisticsMatrix( int numRows , int numCols ) {
super(numRows,numCols);
}
protected StatisticsMatrix(){}
/**
* Wraps a StatisticsMatrix around 'm'. Does NOT create a copy of 'm' but saves a reference
* to it.
*/
public static StatisticsMatrix wrap( DMatrixRMaj m ) {
StatisticsMatrix ret = new StatisticsMatrix();
ret.mat = m;
return ret;
}
/**
* Computes the mean or average of all the elements.
*
* @return mean
*/
public double mean() {
double total = 0;
final int N = getNumElements();
for( int i = 0; i < N; i++ ) {
total += get(i);
}
return total/N;
}
/**
* Computes the unbiased standard deviation of all the elements.
*
* @return standard deviation
*/
public double stdev() {
double m = mean();
double total = 0;
final int N = getNumElements();
if( N <= 1 )
throw new IllegalArgumentException("There must be more than one element to compute stdev");
for( int i = 0; i < N; i++ ) {
double x = get(i);
total += (x - m)*(x - m);
}
total /= (N-1);
return Math.sqrt(total);
}
/**
* Returns a matrix of StatisticsMatrix type so that SimpleMatrix functions create matrices
* of the correct type.
*/
@Override
protected StatisticsMatrix createMatrix(int numRows, int numCols) {
return new StatisticsMatrix(numRows,numCols);
}
public static void main( String args[] ) {
Random rand = new Random(24234);
int N = 500;
// create two vectors whose elements are drawn from uniform distributions
StatisticsMatrix A = StatisticsMatrix.wrap(RandomMatrices_DDRM.rectangle(N,1,0,1,rand));
StatisticsMatrix B = StatisticsMatrix.wrap(RandomMatrices_DDRM.rectangle(N,1,1,2,rand));
// the mean should be about 0.5
System.out.println("Mean of A is "+A.mean());
// the mean should be about 1.5
System.out.println("Mean of B is "+B.mean());
StatisticsMatrix C = A.plus(B);
// the mean should be about 2.0
System.out.println("Mean of C = A + B is "+C.mean());
System.out.println("Standard deviation of A is "+A.stdev());
System.out.println("Standard deviation of B is "+B.stdev());
System.out.println("Standard deviation of C is "+C.stdev());
}
}
</syntaxhighlight>
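The self-typed generics pattern that makes this work, SimpleBase&lt;StatisticsMatrix&gt; returning the subclass type from inherited operations, can be reduced to a minimal sketch. All names below are illustrative, not EJML's:

```java
// The base class is parameterized on its own subclass, so inherited
// operations return T, not the base type. This mirrors how SimpleBase<T>
// lets StatisticsMatrix.plus() return a StatisticsMatrix.
abstract class Base<T extends Base<T>> {
    double value;
    protected abstract T create();   // analogue of createMatrix()
    public T plus(T other) {
        T r = create();
        r.value = this.value + other.value;
        return r;
    }
}

class Stats extends Base<Stats> {
    Stats() {}
    Stats(double v) { value = v; }
    protected Stats create() { return new Stats(); }
    double doubled() { return value * 2; }   // subclass-specific operation
}

public class SelfTypeSketch {
    public static void main(String[] args) {
        Stats a = new Stats(1.5), b = new Stats(2.5);
        // plus() returns Stats, so the subclass method chains without a cast
        System.out.println(a.plus(b).doubled());   // prints 8.0
    }
}
```

Without the self-typed bound, `plus()` would return the base type and every chained call to a subclass method would need a cast.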
42a1aa1f604c2f5f79414f314cc7fee52cd1000d
Example Fixed Sized Matrices
0
17
241
131
2017-05-19T01:15:50Z
Peter
1
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by having each element in the matrix be a variable in the class. EJML provides support for fixed sized matrices and vectors up to 6x6, at which point it loses its advantage. The example below demonstrates how to use a fixed sized matrix and convert to other matrix types in EJML.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
* example is to show some examples of how you can use a fixed sized matrix.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String args[] ) {
// declare the matrix
DMatrix3x3 a = new DMatrix3x3();
DMatrix3x3 b = new DMatrix3x3();
// Can assign values the usual way
for( int i = 0; i < 3; i++ ) {
for( int j = 0; j < 3; j++ ) {
a.set(i,j,i+j+1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
CommonOps_DDF3.transpose(a,b);
b.print();
System.out.println("Determinant = "+ CommonOps_DDF3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize its value
DMatrix3 v = new DMatrix3(1,2,3);
DMatrix3 result = new DMatrix3();
CommonOps_DDF3.mult(a,v,result);
// Conversion into DMatrixRMaj can also be done
DMatrixRMaj dm = ConvertDMatrixStruct.convert(a,null);
dm.print();
// This can be useful if you need to do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
DMatrix3x3 fv = ConvertDMatrixStruct.convert(sv.matrix_F64(),(DMatrix3x3)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
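The performance idea behind fixed sized matrices, one field per element instead of an array, can be shown in a minimal plain-Java sketch. The class below is illustrative and not part of EJML:

```java
public class Fixed3x3Sketch {
    // Each element is its own field, like DMatrix3x3's a11..a33. Operations
    // become straight-line code with no array indexing or bounds checks.
    double a11, a12, a13, a21, a22, a23, a31, a32, a33;

    // determinant by cofactor expansion along the first row
    double det() {
        return a11 * (a22 * a33 - a23 * a32)
             - a12 * (a21 * a33 - a23 * a31)
             + a13 * (a21 * a32 - a22 * a31);
    }

    public static void main(String[] args) {
        Fixed3x3Sketch m = new Fixed3x3Sketch();
        // identity matrix: fields default to 0, set the diagonal
        m.a11 = 1; m.a22 = 1; m.a33 = 1;
        System.out.println(m.det());   // prints 1.0
    }
}
```

The downside is also visible here: every operation must be written out per size, which is why EJML only provides fixed types up to 6x6.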
e6920ff566bb03f71af25ba1c735fe45aff272df
Example Complex Math
0
27
242
132
2017-05-19T01:16:23Z
Peter
1
wikitext
text/x-wiki
The Complex_F64 data type stores a single complex number. The ComplexMath_F64 class contains functions for performing standard math operations on Complex_F64, such as addition and division. The example below demonstrates how to perform these operations.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/ExampleComplexMath.java ExampleComplexMath.java source code]
* <disqus>Discuss this example</disqus>
== Example ==
<syntaxhighlight lang="java">
/**
* Demonstration of different operations that can be performed on complex numbers.
*
* @author Peter Abeles
*/
public class ExampleComplexMath {
public static void main( String []args ) {
Complex_F64 a = new Complex_F64(1,2);
Complex_F64 b = new Complex_F64(-1,-0.6);
Complex_F64 c = new Complex_F64();
ComplexPolar_F64 polarC = new ComplexPolar_F64();
System.out.println("a = "+a);
System.out.println("b = "+b);
System.out.println("------------------");
ComplexMath_F64.plus(a, b, c);
System.out.println("a + b = "+c);
ComplexMath_F64.minus(a, b, c);
System.out.println("a - b = "+c);
ComplexMath_F64.multiply(a, b, c);
System.out.println("a * b = "+c);
ComplexMath_F64.divide(a, b, c);
System.out.println("a / b = "+c);
System.out.println("------------------");
ComplexPolar_F64 polarA = new ComplexPolar_F64();
ComplexMath_F64.convert(a, polarA);
System.out.println("polar notation of a = "+polarA);
ComplexMath_F64.pow(polarA, 3, polarC);
System.out.println("a ** 3 = "+polarC);
ComplexMath_F64.convert(polarC, c);
System.out.println("a ** 3 = "+c);
}
}
</syntaxhighlight>
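The arithmetic behind ComplexMath_F64.multiply and divide is short enough to sketch directly. The class below is an illustrative plain-Java version, not EJML code:

```java
public class ComplexSketch {
    // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    static double[] multiply(double ar, double ai, double br, double bi) {
        return new double[]{ar * br - ai * bi, ar * bi + ai * br};
    }

    // division multiplies numerator and denominator by the conjugate of b,
    // so the denominator becomes the real number |b|^2
    static double[] divide(double ar, double ai, double br, double bi) {
        double norm2 = br * br + bi * bi;
        return new double[]{(ar * br + ai * bi) / norm2,
                            (ai * br - ar * bi) / norm2};
    }

    public static void main(String[] args) {
        double[] p = multiply(1, 2, 3, 4);   // (1+2i)(3+4i) = -5+10i
        System.out.println(p[0] + " " + p[1]);   // prints -5.0 10.0
    }
}
```

The polar form used by pow() in the example follows the same pattern: multiplying magnitudes and adding angles instead of expanding products term by term.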
c40957dc92ff90dc02b0f2f3dfe7a79cf2aa448f
Download
0
6
245
206
2017-09-18T14:23:58Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.31/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.32'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.32</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|}
bd09d23316d0ee8ddc6ed237a81fe2450b9285ab
246
245
2017-09-18T14:24:20Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.32/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.32'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.32</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|}
457dec41f8e88253b61b28ec9490ca6ac360b2db
253
246
2018-01-18T05:20:46Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="bash">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.33/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see the list below), and including each one individually can be tedious. To include everything, simply reference the "ejml-all" artifact, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.33'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.33</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|}
871cf911e48220fd3203ed7dbb1640acd05213f1
256
253
2018-04-13T16:22:27Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="bash">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.34/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see the list below), and including each one individually can be tedious. To include everything, simply reference the "ejml-all" artifact, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.34'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.34</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|}
30850a767787d640eae05771b76daae32748f95a
257
256
2018-04-13T16:22:39Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="bash">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge using the following link: [https://sourceforge.net/projects/ejml/files/v0.34/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see the list below), and including each one individually can be tedious. To include everything, simply reference the "ejml-all" artifact, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.34'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.34</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|}
cbf573d5ff42b5e04472fa0ead9d3da70c3f92d1
Example Sparse Matrices
0
60
249
2017-09-18T15:23:20Z
Peter
1
Created page with " Support for sparse matrices has recently been added to EJML. It supports many but not all of the standard operations that are supported for dense matrics. The code below show..."
wikitext
text/x-wiki
Support for sparse matrices has recently been added to EJML. It supports many, but not all, of the standard operations that are supported for dense matrices. The code below shows the basics of working with a sparse matrix. In some situations the speed improvement from using a sparse matrix can be substantial. Do note that if your system isn't sparse enough, or if its structure isn't advantageous, it could run even slower using sparse operations!
<center>
{| class="wikitable"
! Type !! Execution Time (ms)
|-
| Dense || 12660
|-
| Sparse || 1642
|}
</center>
== Sparse Matrix Example ==
<syntaxhighlight lang="java">
// NOTE: imports added for completeness; package locations follow EJML 0.32+
import java.util.Random;

import org.ejml.data.*;
import org.ejml.ops.ConvertDMatrixStruct;
import org.ejml.dense.row.CommonOps_DDRM;
import org.ejml.dense.row.NormOps_DDRM;
import org.ejml.sparse.csc.CommonOps_DSCC;
import org.ejml.sparse.csc.NormOps_DSCC;
import org.ejml.sparse.csc.RandomMatrices_DSCC;

/**
 * Example showing how to construct and solve a linear system using sparse matrices
 *
 * @author Peter Abeles
 */
public class ExampleSparseMatrix {
    public static int ROWS = 100000;
    public static int COLS = 1000;
    public static int XCOLS = 1;

    public static void main(String[] args) {
        Random rand = new Random(234);

        // easy to work with sparse format, but hard to do computations with
        DMatrixSparseTriplet work = new DMatrixSparseTriplet(5,4,5);
        work.addItem(0,1,1.2);
        work.addItem(3,0,3);
        work.addItem(1,1,22.21234);
        work.addItem(2,3,6);

        // convert into a format that's easier to perform math with
        DMatrixSparseCSC Z = ConvertDMatrixStruct.convert(work,(DMatrixSparseCSC)null);

        // print the matrix to standard out in two different formats
        Z.print();
        System.out.println();
        Z.printNonZero();
        System.out.println();

        // Create a large matrix that is 5% filled
        DMatrixSparseCSC A = RandomMatrices_DSCC.rectangle(ROWS,COLS,(int)(ROWS*COLS*0.05),rand);
        // large vector that is 70% filled
        DMatrixSparseCSC x = RandomMatrices_DSCC.rectangle(COLS,XCOLS,(int)(XCOLS*COLS*0.7),rand);

        System.out.println("Done generating random matrices");

        // storage for the initial solution
        DMatrixSparseCSC y = new DMatrixSparseCSC(ROWS,XCOLS,0);
        DMatrixSparseCSC z = new DMatrixSparseCSC(ROWS,XCOLS,0);

        // To demonstrate how to perform sparse math let's compute y=A*x, then z=1.5*y + 0.75*y
        // Reusable workspace arrays are passed in so new memory isn't declared on each call
        long before = System.currentTimeMillis();
        IGrowArray workA = new IGrowArray(A.numRows);
        DGrowArray workB = new DGrowArray(A.numRows);
        for (int i = 0; i < 100; i++) {
            CommonOps_DSCC.mult(A,x,y,workA,workB);
            CommonOps_DSCC.add(1.5,y,0.75,y,z,workA,workB);
        }
        long after = System.currentTimeMillis();

        System.out.println("norm = "+ NormOps_DSCC.fastNormF(y)+" sparse time = "+(after-before)+" ms");

        // Repeat the same computation with dense matrices for comparison
        DMatrixRMaj Ad = ConvertDMatrixStruct.convert(A,(DMatrixRMaj)null);
        DMatrixRMaj xd = ConvertDMatrixStruct.convert(x,(DMatrixRMaj)null);
        DMatrixRMaj yd = new DMatrixRMaj(y.numRows,y.numCols);
        DMatrixRMaj zd = new DMatrixRMaj(y.numRows,y.numCols);

        before = System.currentTimeMillis();
        for (int i = 0; i < 100; i++) {
            CommonOps_DDRM.mult(Ad, xd, yd);
            CommonOps_DDRM.add(1.5,yd,0.75, yd, zd);
        }
        after = System.currentTimeMillis();
        System.out.println("norm = "+ NormOps_DDRM.fastNormF(yd)+" dense time = "+(after-before)+" ms");
    }
}
</syntaxhighlight>
1be0c2034e1a73acbceec06796ec5d89bb9cbac4
Frequently Asked Questions
0
4
258
208
2018-06-03T22:27:19Z
Peter
1
wikitext
text/x-wiki
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check. Ask yourself: how much memory does that matrix use, and can my computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (rows * columns * 8)/(1024*1024*1024)
Now take that number and multiply it by 3 or 4 to take into account overhead/working memory, and that's about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can have at most 2^31 - 1 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using sparse matrices; see below.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer, the time to compute the solution could well exceed the lifetime of a typical human.
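The rule of thumb above is easy to script. The class below is a small sketch (not part of EJML) that applies the same formula and also checks the Java array-length limit mentioned above:

```java
public class MatrixMemoryCheck {
    // Rough memory needed for one dense double matrix, in gigabytes
    public static double gigabytes(long rows, long cols) {
        return (rows * cols * 8.0) / (1024.0 * 1024.0 * 1024.0);
    }

    // A dense matrix is backed by a single Java array, which is capped
    // at Integer.MAX_VALUE (2^31 - 1) elements
    public static boolean fitsInArray(long rows, long cols) {
        return rows * cols <= Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        long rows = 100_000, cols = 100_000;
        System.out.printf("matrix alone: %.1f GB%n", gigabytes(rows, cols));
        System.out.printf("with ~4x working memory: %.1f GB%n", 4 * gigabytes(rows, cols));
        System.out.println("fits in a Java array: " + fitsInArray(rows, cols));
    }
}
```

A 100,000 x 100,000 matrix needs roughly 75 GB on its own and exceeds the array-length limit outright, so it is infeasible regardless of installed RAM.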
== Will EJML work on Android? ==
Yes, EJML has been used for quite some time on Android. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar in the Maven Central repository. See [[Download]] for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single-threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML has support for sparse matrices, including standard linear algebra operations and LU, Cholesky, and QR decompositions. The decompositions are based on CSparse.
== How do I do cross product? ==
Cross product and other geometric operations are outside of the scope of EJML. EJML is focused on linear algebra and does not aim to replicate tools like Matlab.
== What version of Java? ==
EJML can be compiled with Java 1.8 and beyond. With a few minor modifications to the source code you can get it to compile with 1.5.
a96ef5eeb0872edd06a08c16a9d81f9561e4d8b1
276
258
2019-08-15T02:11:42Z
Peter
1
wikitext
text/x-wiki
= Frequently Asked Questions=
Here is a list of frequently asked questions about EJML. Most of these questions have been asked and answered several times already.
== Why does EJML crash when I try to process a very large matrix? ==
If you are working with large matrices, first do a quick sanity check. Ask yourself: how much memory does that matrix use, and can my computer physically store it? Compute the number of required gigabytes with the following equation:
memory in gigabytes = (rows * columns * 8)/(1024*1024*1024)
Now take that number and multiply it by 3 or 4 to take into account overhead/working memory, and that's about how much memory your system will need to do anything useful. This is true for ALL dense linear algebra libraries. EJML is also limited by the size of a Java array, which can have at most 2^31 - 1 elements. If you are lucky the system is sparse (mostly zeros) and the problem might actually be feasible using sparse matrices; see below.
The other potentially fatal problem is that very large matrices are very slow to process. So even if you have enough RAM on your computer, the time to compute the solution could well exceed the lifetime of a typical human.
== Will EJML work on Android? ==
Yes, EJML has been used for quite some time on Android. The library does include a tiny bit of Swing code, which will not cause any problems as long as you do not call anything related to visualization. In Android Studio simply reference the latest jar in the Maven Central repository. See [[Download]] for how to do that.
== Multi-Threaded ==
Currently EJML is entirely single-threaded. The plan is to max out single-threaded performance by finishing the block algorithm implementations, then declare the library to be at version 1.0. After that has happened, work will start on multi-threaded implementations. However, there is no schedule in place for when all this will happen.
The main driving factor for when major new features are added is when I personally need such a feature. I'm starting to work on larger scale machine learning problems, so there might be a need soon. Another way to speed up the process is to volunteer your time and help develop it.
== Sparse Matrix Support ==
EJML has support for sparse matrices, including standard linear algebra operations and LU, Cholesky, and QR decompositions. The decompositions are based on CSparse.
== How do I do cross product? ==
Cross product and other geometric operations are outside of the scope of EJML. EJML is focused on linear algebra and does not aim to replicate tools like Matlab.
== Which Java version is required? ==
EJML can be compiled with Java 1.8 and beyond.
== What Version of EJML Supports Java X ==
With a bit of effort it's possible to modify EJML to build on ancient versions of Java. This mostly requires stripping out annotations and hunting down a few annoying language changes. That said, if you want to use an official release, your best bet is to use an older version.
{| class="wikitable" style="text-align: center;"
|+ Legacy Java
|-
! Java Version
! EJML
|-
| 1.8 + || [https://github.com/lessthanoptimal/ejml/tree/master Current]
|-
| 1.7 || [https://github.com/lessthanoptimal/ejml/releases/tag/v0.33 v0.33]
|-
| 1.5 || ????
|}
5363c2f919c51f082b5e0d6f0711ff6edce20ea7
Main Page
0
1
259
255
2018-06-14T00:11:30Z
Peter
1
/* Functionality */
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.34''
|-
| '''Date:''' ''April 13, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.34/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definitiveness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
b8e76a2422d4bf24ec8f1b3f434d3828e520e5cf
262
259
2018-08-24T14:05:27Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.35''
|-
| '''Date:''' ''August 24, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.35/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definitiveness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
a1e8b4cddd399a81ee65e10eb3ba0ede8a3b2b77
265
262
2018-09-29T15:03:39Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.36''
|-
| '''Date:''' ''September 29, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.36/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definitiveness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
ca00c4069f81bcdbacab252c44cc265c62d79269
268
265
2018-11-12T06:50:37Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.37''
|-
| '''Date:''' ''November 11, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.37/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
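All three snippets above evaluate the same expression, K = P*H'*inv( H*P*H' + R ). As a dependency-free illustration of the arithmetic being performed (this sketch is not part of EJML's API; the class and method names are hypothetical), here is the scalar (1&times;1) case, where the transpose is a no-op and matrix inversion reduces to a reciprocal:

```java
// Kalman gain K = P H' inv(H P H' + R), specialized to the scalar (1x1) case.
// For 1x1 "matrices" the transpose is the identity and inversion is 1/x,
// so the formula collapses to K = P*H / (H*P*H + R).
public class KalmanGainScalar {
    /** Returns the scalar Kalman gain for covariance P, measurement model H, and noise R. */
    public static double gain(double P, double H, double R) {
        double S = H * P * H + R;   // innovation covariance S = H P H' + R
        if (S == 0.0) {
            // mirrors the invert() failure check in the procedural example
            throw new ArithmeticException("S is singular");
        }
        return P * H / S;           // K = P H' inv(S)
    }

    public static void main(String[] args) {
        // With P = 4, H = 1, R = 1: S = 5 and K = 4/5
        System.out.println(gain(4.0, 1.0, 1.0));
    }
}
```

The dense-matrix versions above follow exactly this structure, with mult/transpose/invert calls replacing the scalar arithmetic.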
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
a90eec6233e04ebab40e827cbdac27617031b8cf
270
268
2018-12-26T03:22:08Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.37.1''
|-
| '''Date:''' ''December 25, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.37/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
412cbba00a2135e4b864adf33a0af2c41d42c76e
271
270
2018-12-26T03:22:27Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.37.1''
|-
| '''Date:''' ''December 25, 2018''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.37.1/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
bd1e575771b4084f9046e5cc15a862fcea13de87
275
271
2019-03-14T04:24:10Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.38''
|-
| '''Date:''' ''March 13, 2019''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.38/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
c51c2445870f4d0be248d48968d303ca84fdb890
277
275
2020-04-07T01:19:51Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.39''
|-
| '''Date:''' ''April 6, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.39/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
a64b5c2768c932954450b013d072dbb512040f20
279
277
2020-10-29T13:39:40Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.39''
|-
| '''Date:''' ''April 6, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.39/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [https://github.com/lessthanoptimal/ejml/blob/SNAPSHOT/main/ejml-kotlin/src/Extensions_F64.kt Kotlin]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multithreaded work will start once block implementations of the SVD and eigenvalue decompositions are finished.
</center>
9a31e81c14b79507570dea5d81c4eb6de84d01b9
281
279
2020-10-29T13:41:49Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, providing a clean API, and offering multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML can be used in three distinct ways: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.39''
|-
| '''Date:''' ''April 6, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.39/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
9172b8ddc5dcaa612d2db2c06e33c661dd49ee64
292
281
2020-11-05T05:31:26Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, fluent object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.40''
|-
| '''Date:''' ''November 4, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.40/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
9b695c92c031b72a236a5d254268658ced293f21
294
292
2020-11-05T05:37:12Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, fluent object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.40''
|-
| '''Date:''' ''November 4, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.40/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
EJML is currently a single-threaded library. Multi-threaded work will start once block implementations of SVD and Eigenvalue decomposition are finished.
</center>
c7c72da737b44a8f4712a89751a776048a99e99f
306
294
2021-02-18T02:44:08Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, fluent object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.40''
|-
| '''Date:''' ''November 4, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.40/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News ==
{| width="500pt" |
|-
|
* Read and write EJML matrices in Matlab format with [https://github.com/HebiRobotics/MFL MFL] from HEBI Robotics
* Graph BLAS support continues to be fleshed out, with masks added to the latest SNAPSHOT
* Concurrency/threading has been added to some operations
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
</center>
41ce57a85849556e255db1ea6478080e95373ac2
307
306
2021-02-18T02:45:41Z
Peter
1
/* News */
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, fluent object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.40''
|-
| '''Date:''' ''November 4, 2020''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.40/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2021 ==
{| width="500pt" |
|-
|
* Read and write EJML matrices in Matlab format with [https://github.com/HebiRobotics/MFL MFL] from HEBI Robotics
* Graph BLAS support continues to be fleshed out, with masks added to the latest SNAPSHOT
* Concurrency/threading has been added to some operations
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
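All three snippets compute the same quantity, K = P*H'*inv( H*P*H' + R ). As a quick, EJML-independent sanity check, the formula reduces to ordinary arithmetic in the scalar (1&times;1) case; the class name and numbers below are invented for illustration:

<syntaxhighlight lang="java">
public class ScalarKalmanGain {
    // Scalar (1x1) version of K = P*H'*inv( H*P*H' + R )
    public static double gain(double P, double H, double R) {
        double S = H * P * H + R; // innovation covariance, H*P*H' + R
        return P * H / S;         // K = P*H'/S
    }
}
</syntaxhighlight>

For example, <code>gain(4.0, 0.5, 1.0)</code> returns 1.0: the innovation covariance is 0.5*4*0.5 + 1 = 2, and 4*0.5/2 = 1.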
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is currently limited to basic operations.
</center>
c99b6c2891221d8635e92d39d0b356c1f152e498
Example Levenberg-Marquardt
0
12
260
235
2018-08-24T13:14:55Z
Peter
1
wikitext
text/x-wiki
Levenberg-Marquardt is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices: when a matrix is reshaped its width and height are changed, but new memory is declared only if the new shape requires more memory than is available.
The algorithm is provided a function, a set of inputs, a set of outputs, and an initial estimate of the parameters (all zeros often works). It finds the parameters that minimize the difference between the computed output and the observed output. A numerical Jacobian is used to estimate the function's gradient.
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt.java code]
* <disqus>Discuss this example</disqus>
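The numerical Jacobian mentioned above is just a forward difference: each parameter is perturbed by a small delta and the change in each output is divided by that delta. A minimal plain-Java sketch of the idea (independent of EJML; the class name, test function, and delta are made up for illustration):

<syntaxhighlight lang="java">
import java.util.function.Function;

public class NumericalJacobian {
    // Forward-difference Jacobian: J[i][j] = d f_i / d p_j
    public static double[][] jacobian(Function<double[], double[]> f, double[] p) {
        double delta = 1e-8;                       // perturbation size; better choices exist (see Numerical Recipes)
        double[] f0 = f.apply(p);                  // outputs at the unperturbed parameters
        double[][] J = new double[f0.length][p.length];
        for (int j = 0; j < p.length; j++) {
            p[j] += delta;                         // perturb one parameter
            double[] f1 = f.apply(p);
            p[j] -= delta;                         // restore it
            for (int i = 0; i < f0.length; i++)
                J[i][j] = (f1[i] - f0[i]) / delta; // finite-difference slope
        }
        return J;
    }
}
</syntaxhighlight>

For f(p) = (p<sub>0</sub>p<sub>1</sub>, p<sub>0</sub>+p<sub>1</sub>) at p = (3, 2), this returns approximately [[2, 3], [1, 1]].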
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
 * d = Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
 * H = Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// Convergence criteria
private int maxIterations = 100;
private double ftol = 1e-12;
private double gtol = 1e-12;
// how much the numerical jacobian calculation perturbs the parameters by.
// In better implementations there are smarter ways to compute this delta. See Numerical Recipes.
private final static double DELTA = 1e-8;
// Damping. Larger values mean it behaves more like gradient descent
private double initialLambda;
// the function that is optimized
private ResidualFunction function;
// the optimized parameters and associated costs
private DMatrixRMaj candidateParameters = new DMatrixRMaj(1,1);
private double initialCost;
private double finalCost;
// used by matrix operations
private DMatrixRMaj g = new DMatrixRMaj(1,1); // gradient
private DMatrixRMaj H = new DMatrixRMaj(1,1); // Hessian approximation
private DMatrixRMaj Hdiag = new DMatrixRMaj(1,1);
private DMatrixRMaj negativeStep = new DMatrixRMaj(1,1);
// variables used by the numerical jacobian algorithm
private DMatrixRMaj temp0 = new DMatrixRMaj(1,1);
private DMatrixRMaj temp1 = new DMatrixRMaj(1,1);
// used when computing d and H variables
private DMatrixRMaj residuals = new DMatrixRMaj(1,1);
// Where the numerical Jacobian is stored.
private DMatrixRMaj jacobian = new DMatrixRMaj(1,1);
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
/**
*
 * @param initialLambda Initial value of the damping parameter. Try 1 to start
*/
public LevenbergMarquardt(double initialLambda) {
this.initialLambda = initialLambda;
}
/**
* Specifies convergence criteria
*
* @param maxIterations Maximum number of iterations
* @param ftol convergence based on change in function value. try 1e-12
* @param gtol convergence based on residual magnitude. Try 1e-12
*/
public void setConvergence( int maxIterations , double ftol , double gtol ) {
this.maxIterations = maxIterations;
this.ftol = ftol;
this.gtol = gtol;
}
/**
* Finds the best fit parameters.
*
* @param function The function being optimized
* @param parameters (Input/Output) initial parameter estimate and storage for optimized parameters
* @return true if it succeeded and false if it did not.
*/
public boolean optimize(ResidualFunction function, DMatrixRMaj parameters )
{
configure(function,parameters.getNumElements());
// save the cost of the initial parameters so that it knows if it improves or not
double previousCost = initialCost = cost(parameters);
// iterate until the difference between the costs is insignificant
double lambda = initialLambda;
// if it should recompute the Jacobian in this iteration or not
boolean computeHessian = true;
for( int iter = 0; iter < maxIterations; iter++ ) {
if( computeHessian ) {
// compute some variables based on the gradient
computeGradientAndHessian(parameters);
computeHessian = false;
// check for convergence using gradient test
boolean converged = true;
for (int i = 0; i < g.getNumElements(); i++) {
if( Math.abs(g.data[i]) > gtol ) {
converged = false;
break;
}
}
if( converged ) {
finalCost = previousCost;
return true;
}
}
// H = H + lambda*I
for (int i = 0; i < H.numRows; i++) {
H.set(i,i, Hdiag.get(i) + lambda);
}
// In robust implementations failure to solve is handled much better
if( !CommonOps_DDRM.solve(H, g, negativeStep) ) {
return false;
}
// compute the candidate parameters
CommonOps_DDRM.subtract(parameters, negativeStep, candidateParameters);
double cost = cost(candidateParameters);
if( cost <= previousCost ) {
// the candidate parameters produced better results so use it
computeHessian = true;
parameters.set(candidateParameters);
// check for convergence
// ftol <= (cost(k) - cost(k+1))/cost(k)
boolean converged = ftol*previousCost >= previousCost-cost;
previousCost = cost;
lambda /= 10.0;
if( converged ) {
finalCost = previousCost;
return true;
}
} else {
lambda *= 10.0;
}
}
finalCost = previousCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure(ResidualFunction function , int numParam )
{
this.function = function;
int numFunctions = function.numFunctions();
// reshaping a matrix means that new memory is only declared when needed
candidateParameters.reshape(numParam,1);
g.reshape(numParam,1);
H.reshape(numParam,numParam);
negativeStep.reshape(numParam,1);
// Normally these variables are thought of as row vectors, but it works out easier if they are column
temp0.reshape(numFunctions,1);
temp1.reshape(numFunctions,1);
residuals.reshape(numFunctions,1);
jacobian.reshape(numFunctions,numParam);
}
/**
* Computes the d and H parameters.
*
* d = J'*(f(x)-y) <--- that's also the gradient
* H = J'*J
*/
private void computeGradientAndHessian(DMatrixRMaj param )
{
// residuals = f(x) - y
function.compute(param, residuals);
computeNumericalJacobian(param,jacobian);
CommonOps_DDRM.multTransA(jacobian, residuals, g);
CommonOps_DDRM.multTransA(jacobian, jacobian, H);
CommonOps_DDRM.extractDiag(H,Hdiag);
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x) - y)^2
*/
private double cost(DMatrixRMaj param )
{
function.compute(param, residuals);
double error = NormOps_DDRM.normF(residuals);
return error*error / (double)residuals.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param (input) The set of parameters that the Jacobian is to be computed at.
* @param jacobian (output) Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DMatrixRMaj param ,
DMatrixRMaj jacobian )
{
double invDelta = 1.0/DELTA;
function.compute(param, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it effects the results.
for( int i = 0; i < param.getNumElements(); i++ ) {
param.data[i] += DELTA;
function.compute(param, temp1);
// compute the difference between the two parameters and divide by the delta
// temp1 = (temp1 - temp0)/delta
CommonOps_DDRM.add(invDelta,temp1,-invDelta,temp0,temp1);
// copy the results into the jacobian matrix
// J(i,:) = temp1
CommonOps_DDRM.insert(temp1,jacobian,0,i);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized. Returns the residual. f(x) - y
*/
public interface ResidualFunction {
/**
* Computes the residual vector given the set of input parameters
* Function which goes from N input to M outputs
*
* @param param (Input) N by 1 parameter vector
* @param residual (Output) M by 1 output vector to store the residual = f(x)-y
*/
void compute(DMatrixRMaj param , DMatrixRMaj residual );
/**
* Number of functions in output
* @return function count
*/
int numFunctions();
}
}
</syntaxhighlight>
6462b9005471afd3fda5f00b925b6146ba22ae3a
261
260
2018-08-24T13:17:57Z
Peter
1
wikitext
text/x-wiki
Levenberg-Marquardt (LM) is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
LM works by being provided a function which computes the residual error. The residual error is defined as the difference between the predicted output and the actual observed output, e.g. f(x)-y. Optimization works by finding the set of parameters which minimizes the magnitude of the residuals, as measured by the Frobenius norm (for a column vector this is the same as the Euclidean norm).
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
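To make "residual" and "cost" concrete before diving into the full implementation, here is a minimal sketch using plain Java arrays rather than EJML types. The linear model f(x) = a*x and the data are invented purely for illustration; the cost is the same mean squared residual computed by the <code>cost()</code> method in the example code below.
<syntaxhighlight lang="java">
// Hypothetical toy model f(x) = a*x with made-up data; illustration only, not part of EJML.
public class LmSketch {
    // residual r_i = f(x_i) - y_i = a*x_i - y_i
    public static double[] residuals(double a, double[] x, double[] y) {
        double[] r = new double[x.length];
        for (int i = 0; i < x.length; i++)
            r[i] = a * x[i] - y[i];
        return r;
    }

    // mean squared residual: (1/N) * Sum r_i^2
    public static double cost(double[] r) {
        double sum = 0;
        for (double v : r)
            sum += v * v;
        return sum / r.length;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3};
        double[] y = {2, 4, 6};                         // generated with a = 2
        System.out.println(cost(residuals(2.0, x, y))); // 0.0 at the optimum
        System.out.println(cost(residuals(1.0, x, y))); // 14/3 ~= 4.667
    }
}
</syntaxhighlight>
The optimizer's job is simply to drive this number as close to zero as it can by adjusting the parameters.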
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.35/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt.java code]
* <disqus>Discuss this example</disqus>
== Example Code ==
<syntaxhighlight lang="java">
import org.ejml.data.DMatrixRMaj;
import org.ejml.dense.row.CommonOps_DDRM;
import org.ejml.dense.row.NormOps_DDRM;

/**
 * <p>
 * This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
 * non-linear cost functions:<br>
 * <br>
 * S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
 * <br>
 * where P is the set of parameters being optimized.
 * </p>
 *
 * <p>
 * In each iteration the parameters are updated using the following equations:<br>
 * <br>
 * P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
 * d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
 * H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
 * </p>
 * <p>
 * Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
 * A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
 * </p>
 * @author Peter Abeles
 */
public class LevenbergMarquardt {
    // Convergence criteria
    private int maxIterations = 100;
    private double ftol = 1e-12;
    private double gtol = 1e-12;

    // How much the numerical Jacobian calculation perturbs the parameters by.
    // In a better implementation there are smarter ways to compute this delta. See Numerical Recipes.
    private final static double DELTA = 1e-8;

    // Damping factor. Larger values make the update behave more like gradient descent.
    private double initialLambda;

    // the function that is optimized
    private ResidualFunction function;

    // the optimized parameters and associated costs
    private DMatrixRMaj candidateParameters = new DMatrixRMaj(1,1);
    private double initialCost;
    private double finalCost;

    // used by matrix operations
    private DMatrixRMaj g = new DMatrixRMaj(1,1);            // gradient
    private DMatrixRMaj H = new DMatrixRMaj(1,1);            // Hessian approximation
    private DMatrixRMaj Hdiag = new DMatrixRMaj(1,1);
    private DMatrixRMaj negativeStep = new DMatrixRMaj(1,1);

    // variables used by the numerical jacobian algorithm
    private DMatrixRMaj temp0 = new DMatrixRMaj(1,1);
    private DMatrixRMaj temp1 = new DMatrixRMaj(1,1);
    // used when computing d and H variables
    private DMatrixRMaj residuals = new DMatrixRMaj(1,1);

    // Where the numerical Jacobian is stored.
    private DMatrixRMaj jacobian = new DMatrixRMaj(1,1);

    public double getInitialCost() {
        return initialCost;
    }

    public double getFinalCost() {
        return finalCost;
    }

    /**
     * @param initialLambda Initial value of the damping parameter. Try 1 to start.
     */
    public LevenbergMarquardt(double initialLambda) {
        this.initialLambda = initialLambda;
    }

    /**
     * Specifies convergence criteria
     *
     * @param maxIterations Maximum number of iterations
     * @param ftol convergence based on change in function value. Try 1e-12
     * @param gtol convergence based on residual magnitude. Try 1e-12
     */
    public void setConvergence( int maxIterations , double ftol , double gtol ) {
        this.maxIterations = maxIterations;
        this.ftol = ftol;
        this.gtol = gtol;
    }

    /**
     * Finds the best fit parameters.
     *
     * @param function The function being optimized
     * @param parameters (Input/Output) initial parameter estimate and storage for optimized parameters
     * @return true if it succeeded and false if it did not.
     */
    public boolean optimize( ResidualFunction function , DMatrixRMaj parameters )
    {
        configure(function, parameters.getNumElements());

        // save the cost of the initial parameters so that it knows if it improves or not
        double previousCost = initialCost = cost(parameters);

        // iterate until the difference between the costs is insignificant
        double lambda = initialLambda;

        // if it should recompute the Jacobian in this iteration or not
        boolean computeHessian = true;

        for( int iter = 0; iter < maxIterations; iter++ ) {
            if( computeHessian ) {
                // compute some variables based on the gradient
                computeGradientAndHessian(parameters);
                computeHessian = false;

                // check for convergence using gradient test
                boolean converged = true;
                for (int i = 0; i < g.getNumElements(); i++) {
                    if( Math.abs(g.data[i]) > gtol ) {
                        converged = false;
                        break;
                    }
                }
                if( converged ) {
                    finalCost = previousCost;
                    return true;
                }
            }

            // H = H + lambda*I
            for (int i = 0; i < H.numRows; i++) {
                H.set(i,i, Hdiag.get(i) + lambda);
            }

            // In robust implementations failure to solve is handled much better
            if( !CommonOps_DDRM.solve(H, g, negativeStep) ) {
                return false;
            }

            // compute the candidate parameters
            CommonOps_DDRM.subtract(parameters, negativeStep, candidateParameters);

            double cost = cost(candidateParameters);
            if( cost <= previousCost ) {
                // the candidate parameters produced better results, so accept them
                computeHessian = true;
                parameters.set(candidateParameters);

                // check for convergence
                // ftol <= (cost(k) - cost(k+1))/cost(k)
                boolean converged = ftol*previousCost >= previousCost-cost;
                previousCost = cost;
                lambda /= 10.0;
                if( converged ) {
                    finalCost = previousCost;
                    return true;
                }
            } else {
                lambda *= 10.0;
            }
        }
        finalCost = previousCost;
        return true;
    }

    /**
     * Performs sanity checks on the input data and reshapes internal matrices. By reshaping
     * a matrix it will only declare new memory when needed.
     */
    protected void configure( ResidualFunction function , int numParam )
    {
        this.function = function;
        int numFunctions = function.numFunctions();

        // reshaping a matrix means that new memory is only declared when needed
        candidateParameters.reshape(numParam,1);
        g.reshape(numParam,1);
        H.reshape(numParam,numParam);
        negativeStep.reshape(numParam,1);

        // Normally these variables are thought of as row vectors, but it works out easier if they are column
        temp0.reshape(numFunctions,1);
        temp1.reshape(numFunctions,1);
        residuals.reshape(numFunctions,1);
        jacobian.reshape(numFunctions,numParam);
    }

    /**
     * Computes the d and H parameters.
     *
     * d = J'*(f(x)-y)  <--- that's also the gradient
     * H = J'*J
     */
    private void computeGradientAndHessian( DMatrixRMaj param )
    {
        // residuals = f(x) - y
        function.compute(param, residuals);

        computeNumericalJacobian(param, jacobian);

        CommonOps_DDRM.multTransA(jacobian, residuals, g);
        CommonOps_DDRM.multTransA(jacobian, jacobian, H);

        CommonOps_DDRM.extractDiag(H, Hdiag);
    }

    /**
     * Computes the "cost" for the parameters given.
     *
     * cost = (1/N) Sum (f(x) - y)^2
     */
    private double cost( DMatrixRMaj param )
    {
        function.compute(param, residuals);

        double error = NormOps_DDRM.normF(residuals);

        return error*error / (double)residuals.numRows;
    }

    /**
     * Computes a simple numerical Jacobian.
     *
     * @param param (input) The set of parameters that the Jacobian is to be computed at.
     * @param jacobian (output) Where the Jacobian will be stored
     */
    protected void computeNumericalJacobian( DMatrixRMaj param ,
                                             DMatrixRMaj jacobian )
    {
        double invDelta = 1.0/DELTA;

        function.compute(param, temp0);

        // compute the Jacobian by perturbing each parameter slightly
        // and then seeing how it affects the results
        for( int i = 0; i < param.getNumElements(); i++ ) {
            param.data[i] += DELTA;
            function.compute(param, temp1);

            // compute the difference between the two function evaluations and divide by the delta
            // temp1 = (temp1 - temp0)/delta
            CommonOps_DDRM.add(invDelta, temp1, -invDelta, temp0, temp1);

            // copy the results into the Jacobian matrix
            // J(:,i) = temp1
            CommonOps_DDRM.insert(temp1, jacobian, 0, i);

            param.data[i] -= DELTA;
        }
    }

    /**
     * The function that is being optimized. Returns the residual, f(x) - y.
     */
    public interface ResidualFunction {
        /**
         * Computes the residual vector given the set of input parameters.
         * The function goes from N inputs to M outputs.
         *
         * @param param (Input) N by 1 parameter vector
         * @param residual (Output) M by 1 output vector to store the residual = f(x)-y
         */
        void compute( DMatrixRMaj param , DMatrixRMaj residual );

        /**
         * Number of functions in output
         * @return function count
         */
        int numFunctions();
    }
}
</syntaxhighlight>
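To see the damping logic in isolation, here is a self-contained sketch of the same accept/reject rule applied to a one-parameter linear model, using plain Java rather than EJML types. The model, data, and class name are invented for illustration; in one dimension the solve step (H + λI)<sup>-1</sup>g collapses to a scalar division, so the structure of the loop is easy to follow.
<syntaxhighlight lang="java">
public class ScalarLm {
    /**
     * Fits a in the model f(x) = a*x to data (x,y) by least squares, using the same
     * damping schedule as the example above: lambda /= 10 on success, lambda *= 10 on failure.
     */
    public static double fit(double[] x, double[] y, double a, double lambda) {
        double prevCost = cost(a, x, y);
        for (int iter = 0; iter < 100; iter++) {
            // Gauss-Newton pieces: gradient g = J'r and "Hessian" h = J'J,
            // where the Jacobian of the residual r_i = a*x_i - y_i is just x_i
            double g = 0, h = 0;
            for (int i = 0; i < x.length; i++) {
                g += (a * x[i] - y[i]) * x[i];
                h += x[i] * x[i];
            }
            if (Math.abs(g) < 1e-12)                      // gradient convergence test (gtol)
                break;
            double candidate = a - g / (h + lambda);      // (H + lambda*I)^-1 g in 1-D
            double c = cost(candidate, x, y);
            if (c <= prevCost) {                          // accept: damp less (more Gauss-Newton)
                a = candidate;
                prevCost = c;
                lambda /= 10.0;
            } else {                                      // reject: damp more (more gradient descent)
                lambda *= 10.0;
            }
        }
        return a;
    }

    // mean squared residual, matching cost() in the example above
    static double cost(double a, double[] x, double[] y) {
        double s = 0;
        for (int i = 0; i < x.length; i++) {
            double r = a * x[i] - y[i];
            s += r * r;
        }
        return s / x.length;
    }

    public static void main(String[] args) {
        // data generated from a = 2, so the fit should recover a value very close to 2.0
        double a = fit(new double[]{1, 2, 3}, new double[]{2, 4, 6}, 0.0, 1.0);
        System.out.println(a);
    }
}
</syntaxhighlight>
The full implementation above does the same thing with matrices: rejecting a step never re-evaluates the Jacobian, only inflates lambda, which shrinks the step and rotates it toward the gradient direction.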
1ca3e5dc7db67f5d1dd12888e1be6c5a248ff4e4
Download
0
6
263
257
2018-08-24T14:06:02Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub. There you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge at the following link: [https://sourceforge.net/projects/ejml/files/v0.40/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see list below), and including each one individually can be tedious. To include all of them, simply reference "all", as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.40'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
    <groupId>org.ejml</groupId>
    <artifactId>ejml-all</artifactId>
    <version>0.40</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
d9c6568069d0b4c02d5ff3611f6a0e917328e617
Manual
0
8
264
248
2018-09-19T16:03:02Z
Peter
1
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use EJML and develop an application with it. Answers to other questions, such as how to build EJML or include it in your project, can be found in the list below. If you have a question which isn't answered, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily through examples, see below. The examples are selected from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces provided in EJML, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.8 and beyond.
== The Interfaces ==
A primary design goal of EJML is to let users write both highly optimized code and easy-to-read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a fluent style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. It can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box, and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
    throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is a bottleneck. It's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Sparse Matrices|Sparse Matrix Basics]] || X || ||
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works, so you can write more effective code and employ more advanced techniques? Curious where EJML's logo comes from? The following books are recommended reading; they made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms
** Matrix Computations by G. Golub and C. Van Loan
* Covers the sparse algorithms used in EJML
** Direct Methods for Sparse Linear Systems by Timothy A. Davis
* Popular book on linear algebra
** Linear Algebra and Its Applications by Gilbert Strang
d96c4d1a9a6338a393bfd80e08d46dbda931bc46
266
264
2018-09-29T15:03:59Z
Peter
1
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided up unto basic (addition, subtraction, multiplication, ...etc), decompositions (LU, QR, SVD, ... etc), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Other questions, like how to build or include it in your project, is provided in the list below. If you have a question which isn't answered or is confusion feel free to post a question on the message board! Instructions on how to use EJML is primarily done in this manual through example, see below. The examples are selected from common real-world problems, such as Kalman filters. Some times the same example is provided in three different formats using one of the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.8 and beyond.
== The Interfaces ==
A primary design goal of EJML was to provide users the capability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API BoofCV provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to connect multiple operations together using a flow strategy, which is much easier to read and write. Limited subset of operations are supported and memory is constantly created and destroyed.
* [[Equations]]: Is a symbolic interface that allows you to manipulate matrices in a similar manor to Matlab/Octave. Can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box and the compiler isn't smart enough to pick the most efficient functions.
Example of compute the Kalman gain "K"
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Sparse Matrices|Sparse Matrix Basics]] || X || ||
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms
** Matrix Computations by G. Golub and C. Van Loan
* Direct Methods for Sparse Linear Systems by Timothy A. Davis
** Covers the sparse algorithms used in EJML
* Popular book on linear algebra
** Linear Algebra and Its Applications by Gilbert Strang
df5a01d7adc5c269a4f156d1224a294cd83c629b
286
266
2020-11-05T05:11:05Z
Peter
1
/* Example Code */
wikitext
text/x-wiki
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. The list of standard operations is typically divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
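As a rough illustration of what those categories of operations involve, here is a small plain-Java sketch of a row-major matrix multiply and a tiny 2x2 linear solve. This is illustration only, not EJML code: EJML's DMatrixRMaj stores dense data in exactly this kind of flat row-major array, but its solvers use far more capable algorithms (LU, QR, Cholesky) rather than Cramer's rule.

```java
// Illustration only: a row-major "matrix" is a flat double[] plus its shape.
public class DenseSketch {
    // C = A (m x n) times B (n x p), all row-major flat arrays
    public static double[] mult(double[] A, double[] B, int m, int n, int p) {
        double[] C = new double[m * p];
        for (int i = 0; i < m; i++)
            for (int k = 0; k < n; k++) {
                double aik = A[i * n + k];
                for (int j = 0; j < p; j++)
                    C[i * p + j] += aik * B[k * p + j];
            }
        return C;
    }

    // Solve the 2x2 system A x = b with Cramer's rule; real solvers use LU/QR instead
    public static double[] solve2x2(double[] A, double[] b) {
        double det = A[0] * A[3] - A[1] * A[2];
        if (Math.abs(det) < 1e-12) throw new IllegalArgumentException("singular");
        return new double[]{(b[0] * A[3] - b[1] * A[1]) / det,
                            (b[1] * A[0] - b[0] * A[2]) / det};
    }

    public static void main(String[] args) {
        // solves 2x + y = 5, x + 3y = 10
        double[] x = solve2x2(new double[]{2, 1, 1, 3}, new double[]{5, 10});
        System.out.println(x[0] + " " + x[1]);
    }
}
```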
This manual describes how to use EJML and develop applications with it. Other questions, such as how to build it or include it in your project, are answered in the links below. If you have a question which isn't answered here, or something is confusing, feel free to post on the message board! This manual teaches EJML primarily by example; see below. The examples are drawn from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats, one for each of the three interfaces EJML provides, to help you understand their differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.8 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and have almost complete control over memory. The downside is that it feels a bit like programming in assembly, and exercising that much control over memory is tedious.
* [[SimpleMatrix]]: An object-oriented API that lets you chain multiple operations together in a fluent style, which is much easier to read and write. Only a limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that lets you manipulate matrices in a manner similar to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box, and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is best. If you are dealing with small matrices and need to write highly optimized code, then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use, since the overhead is insignificant compared to the matrix operations themselves. If you want to write something quickly, then [[SimpleMatrix]] or [[Equations]] is the way to go. If you are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations, then benchmarking to see if that code is actually a bottleneck. It's much easier to debug that way.
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems, intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Sparse Matrices|Sparse Matrix Basics]] || X || ||
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|-
| [[Example Concurrent Operations|Concurrent Operations]] || X || ||
|-
| [[Example Graph Paths|Graph Paths]] || X || ||
|-
| [[Example Large Dense Matrices|Optimizing Large Dense]] || X || ||
|}
= External References =
Want to learn more about how EJML works to write more effective code and employ more advanced techniques? Understand where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms
** Matrix Computations by G. Golub and C. Van Loan
* Direct Methods for Sparse Linear Systems by Timothy A. Davis
** Covers the sparse algorithms used in EJML
* Popular book on linear algebra
** Linear Algebra and Its Applications by Gilbert Strang
5677847d247489e159e9d319aef6251a002a87ad
Example Sparse Matrices
0
60
273
249
2019-03-14T04:16:06Z
Peter
1
wikitext
text/x-wiki
Support for sparse matrices has recently been added to EJML. It supports many, but not all, of the standard operations that are supported for dense matrices. The code below shows the basics of working with a sparse matrix. In some situations the speed improvement from using a sparse matrix can be substantial. Do note that if your system isn't sparse enough, or if its structure isn't advantageous, it could run even slower with sparse operations!
<center>
{| class="wikitable"
! Type !! Execution Time (ms)
|-
| Dense || 12660
|-
| Sparse || 1642
|}
</center>
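The speedup comes from the storage format. Here is a minimal plain-Java sketch of compressed sparse column (CSC) storage, the layout behind DMatrixSparseCSC: only non-zero values are stored, grouped by column, so a matrix-vector multiply costs time proportional to the number of non-zeros rather than rows times columns. This is an illustration of the format, not EJML's implementation.

```java
// Illustration only: CSC storage and a sparse matrix-vector multiply.
public class CscSketch {
    final double[] values; // non-zero values, ordered column by column
    final int[] rowIdx;    // row index of each stored value
    final int[] colPtr;    // column j's entries live in values[colPtr[j] .. colPtr[j+1])

    public CscSketch(double[] values, int[] rowIdx, int[] colPtr) {
        this.values = values;
        this.rowIdx = rowIdx;
        this.colPtr = colPtr;
    }

    // y = A*x for a dense vector x; only stored (non-zero) entries are visited
    public double[] mult(double[] x, int numRows) {
        double[] y = new double[numRows];
        for (int j = 0; j < colPtr.length - 1; j++)
            for (int idx = colPtr[j]; idx < colPtr[j + 1]; idx++)
                y[rowIdx[idx]] += values[idx] * x[j];
        return y;
    }
}
```

For example, the 3x3 matrix [[1,0,2],[0,3,0],[0,0,4]] stores just four values with `rowIdx = {0,1,0,2}` and `colPtr = {0,1,2,4}`.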
== Sparse Matrix Example ==
<syntaxhighlight lang="java">
/**
* Example showing how to construct and solve a linear system using sparse matrices
*
* @author Peter Abeles
*/
public class ExampleSparseMatrix {
public static int ROWS = 100000;
public static int COLS = 1000;
public static int XCOLS = 1;
public static void main(String[] args) {
Random rand = new Random(234);
// easy to work with sparse format, but hard to do computations with
// NOTE: It is very important that you set 'initLength' to the actual number of elements in the final array
// If you don't it will be forced to thrash memory as it grows its internal data structures.
// Failure to heed this advice will make construction of large matrices 4x slower and use 2x more memory
DMatrixSparseTriplet work = new DMatrixSparseTriplet(5,4,5);
work.addItem(0,1,1.2);
work.addItem(3,0,3);
work.addItem(1,1,22.21234);
work.addItem(2,3,6);
// convert into a format that's easier to perform math with
DMatrixSparseCSC Z = ConvertDMatrixStruct.convert(work,(DMatrixSparseCSC)null);
// print the matrix to standard out in two different formats
Z.print();
System.out.println();
Z.printNonZero();
System.out.println();
// Create a large matrix that is 5% filled
DMatrixSparseCSC A = RandomMatrices_DSCC.rectangle(ROWS,COLS,(int)(ROWS*COLS*0.05),rand);
// large vector that is 70% filled
DMatrixSparseCSC x = RandomMatrices_DSCC.rectangle(COLS,XCOLS,(int)(XCOLS*COLS*0.7),rand);
System.out.println("Done generating random matrices");
// storage for the initial solution
DMatrixSparseCSC y = new DMatrixSparseCSC(ROWS,XCOLS,0);
DMatrixSparseCSC z = new DMatrixSparseCSC(ROWS,XCOLS,0);
// To demonstrate how to perform sparse math, let's multiply:
// y=A*x
// Optional storage is set to null so that it will declare it internally
long before = System.currentTimeMillis();
IGrowArray workA = new IGrowArray(A.numRows);
DGrowArray workB = new DGrowArray(A.numRows);
for (int i = 0; i < 100; i++) {
CommonOps_DSCC.mult(A,x,y,workA,workB);
CommonOps_DSCC.add(1.5,y,0.75,y,z,workA,workB);
}
long after = System.currentTimeMillis();
System.out.println("norm = "+ NormOps_DSCC.fastNormF(y)+" sparse time = "+(after-before)+" ms");
DMatrixRMaj Ad = ConvertDMatrixStruct.convert(A,(DMatrixRMaj)null);
DMatrixRMaj xd = ConvertDMatrixStruct.convert(x,(DMatrixRMaj)null);
DMatrixRMaj yd = new DMatrixRMaj(y.numRows,y.numCols);
DMatrixRMaj zd = new DMatrixRMaj(y.numRows,y.numCols);
before = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
CommonOps_DDRM.mult(Ad, xd, yd);
CommonOps_DDRM.add(1.5,yd,0.75, yd, zd);
}
after = System.currentTimeMillis();
System.out.println("norm = "+ NormOps_DDRM.fastNormF(yd)+" dense time = "+(after-before)+" ms");
}
}
</syntaxhighlight>
cc7d8128f2640feb0e96086c7ec530a844920a3c
284
273
2020-11-05T04:58:44Z
Peter
1
/* Sparse Matrix Example */
wikitext
text/x-wiki
Support for sparse matrices has recently been added to EJML. It supports many, but not all, of the standard operations that are supported for dense matrices. The code below shows the basics of working with a sparse matrix. In some situations the speed improvement from using a sparse matrix can be substantial. Do note that if your system isn't sparse enough, or if its structure isn't advantageous, it could run even slower with sparse operations!
<center>
{| class="wikitable"
! Type !! Execution Time (ms)
|-
| Dense || 12660
|-
| Sparse || 1642
|}
</center>
== Sparse Matrix Example ==
<syntaxhighlight lang="java">
/**
* Example showing how to construct and solve a linear system using sparse matrices
*
* @author Peter Abeles
*/
public class ExampleSparseMatrix {
public static int ROWS = 100000;
public static int COLS = 1000;
public static int XCOLS = 1;
public static void main(String[] args) {
Random rand = new Random(234);
// easy to work with sparse format, but hard to do computations with
// NOTE: It is very important that you set 'initLength' to the actual number of elements in the final array
// If you don't it will be forced to thrash memory as it grows its internal data structures.
// Failure to heed this advice will make construction of large matrices 4x slower and use 2x more memory
DMatrixSparseTriplet work = new DMatrixSparseTriplet(5,4,5);
work.addItem(0,1,1.2);
work.addItem(3,0,3);
work.addItem(1,1,22.21234);
work.addItem(2,3,6);
// convert into a format that's easier to perform math with
DMatrixSparseCSC Z = DConvertMatrixStruct.convert(work,(DMatrixSparseCSC)null);
// print the matrix to standard out in two different formats
Z.print();
System.out.println();
Z.printNonZero();
System.out.println();
// Create a large matrix that is 5% filled
DMatrixSparseCSC A = RandomMatrices_DSCC.rectangle(ROWS,COLS,(int)(ROWS*COLS*0.05),rand);
// large vector that is 70% filled
DMatrixSparseCSC x = RandomMatrices_DSCC.rectangle(COLS,XCOLS,(int)(XCOLS*COLS*0.7),rand);
System.out.println("Done generating random matrices");
// storage for the initial solution
DMatrixSparseCSC y = new DMatrixSparseCSC(ROWS,XCOLS,0);
DMatrixSparseCSC z = new DMatrixSparseCSC(ROWS,XCOLS,0);
// To demonstrate how to perform sparse math, let's multiply:
// y=A*x
// Optional storage is set to null so that it will declare it internally
long before = System.currentTimeMillis();
IGrowArray workA = new IGrowArray(A.numRows);
DGrowArray workB = new DGrowArray(A.numRows);
for (int i = 0; i < 100; i++) {
CommonOps_DSCC.mult(A,x,y,workA,workB);
CommonOps_DSCC.add(1.5,y,0.75,y,z,workA,workB);
}
long after = System.currentTimeMillis();
System.out.println("norm = "+ NormOps_DSCC.fastNormF(y)+" sparse time = "+(after-before)+" ms");
DMatrixRMaj Ad = DConvertMatrixStruct.convert(A,(DMatrixRMaj)null);
DMatrixRMaj xd = DConvertMatrixStruct.convert(x,(DMatrixRMaj)null);
DMatrixRMaj yd = new DMatrixRMaj(y.numRows,y.numCols);
DMatrixRMaj zd = new DMatrixRMaj(y.numRows,y.numCols);
before = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
CommonOps_DDRM.mult(Ad, xd, yd);
CommonOps_DDRM.add(1.5,yd,0.75, yd, zd);
}
after = System.currentTimeMillis();
System.out.println("norm = "+ NormOps_DDRM.fastNormF(yd)+" dense time = "+(after-before)+" ms");
}
}
</syntaxhighlight>
44205e5271841880d200b24082cad2c154e05e8d
Kotlin
0
61
280
2020-10-29T13:41:12Z
Peter
1
Created page with "= EJML in Kotlin! = EJML works just fine when used in the Kotlin JVM environment. EJML also provides specialized Kotlin support in the form of Kotlin extensions."
wikitext
text/x-wiki
= EJML in Kotlin! =
EJML works just fine when used in the Kotlin JVM environment. EJML also provides specialized Kotlin support in the form of Kotlin extensions.
73601497b0a969d770128176f352b34ed978721b
282
280
2020-10-29T13:47:43Z
Peter
1
wikitext
text/x-wiki
= EJML in Kotlin! =
EJML works just fine when used in the Kotlin JVM environment. EJML also provides specialized Kotlin support in the form of Kotlin extensions. A complete list of extensions can be found on [https://github.com/lessthanoptimal/ejml/blob/SNAPSHOT/main/ejml-kotlin/src/Extensions_F64.kt Github]. This is still considered a preview feature. Suggestions and pull requests to improve the Kotlin support are most welcome!
Kotlin
<syntaxhighlight lang="kotlin">
val c = H*P
val S = multTransB(c,H,null)
S += R
val S_inv = S.invert()
val d = multTransA(H, S_inv, null)
val K = P*d
</syntaxhighlight>
Java
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
1c8e244c706252bf8bd458f5281d73be663892cd
Example Fixed Sized Matrices
0
17
283
241
2020-11-05T04:57:45Z
Peter
1
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed-sized matrix gets around that issue by making each element of the matrix a field in the class. EJML provides support for fixed-sized matrices and vectors up to 6x6, at which point the approach loses its advantage. The example below demonstrates how to use a fixed-sized matrix and how to convert between it and other matrix types in EJML.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
* <disqus>Discuss this example</disqus>
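The idea can be sketched without EJML: when each element is a plain field, the operations are fully unrolled and involve no array indexing or bounds checks at all. The class below is an illustration of that principle only, not EJML's actual DMatrix3x3 code.

```java
// Illustration only: a fixed sized 2x2 matrix where every element is a field.
public class Fixed2x2 {
    public double a11, a12, a21, a22;

    public Fixed2x2(double a11, double a12, double a21, double a22) {
        this.a11 = a11; this.a12 = a12; this.a21 = a21; this.a22 = a22;
    }

    // c = a*b, fully unrolled: no loops, no array access
    public static Fixed2x2 mult(Fixed2x2 a, Fixed2x2 b) {
        return new Fixed2x2(
                a.a11*b.a11 + a.a12*b.a21, a.a11*b.a12 + a.a12*b.a22,
                a.a21*b.a11 + a.a22*b.a21, a.a21*b.a12 + a.a22*b.a22);
    }

    public static double det(Fixed2x2 a) {
        return a.a11*a.a22 - a.a12*a.a21;
    }
}
```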
== Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
* example is to show some examples of how you can use a fixed sized matrix.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String args[] ) {
// declare the matrix
DMatrix3x3 a = new DMatrix3x3();
DMatrix3x3 b = new DMatrix3x3();
// Can assign values the usual way
for( int i = 0; i < 3; i++ ) {
for( int j = 0; j < 3; j++ ) {
a.set(i,j,i+j+1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
CommonOps_DDF3.transpose(a,b);
b.print();
System.out.println("Determinant = "+ CommonOps_DDF3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize its value
DMatrix3 v = new DMatrix3(1,2,3);
DMatrix3 result = new DMatrix3();
CommonOps_DDF3.mult(a,v,result);
// Conversion into DMatrixRMaj can also be done
DMatrixRMaj dm = DConvertMatrixStruct.convert(a,null);
dm.print();
// This can be useful if you need do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
DMatrix3x3 fv = DConvertMatrixStruct.convert(sv.getDDRM(),(DMatrix3x3)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
05a6bb9f1c5925b1ecbb1356111bd4348e2a75ed
Example Kalman Filter
0
10
285
234
2020-11-05T05:08:06Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Operations || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* <disqus>Discuss this example</disqus>
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert the matrix in a Kalman filter. There are applications, such as target tracking, where matrix inversion of the innovation covariance is helpful as a preprocessing step.
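Before diving into the implementations, the gain equation K = P H' (H P H' + R)^-1 and the covariance update are easy to check by hand in the scalar (1x1) case, where every matrix is just a number. A minimal sketch with made-up numbers, for intuition only:

```java
// Illustration only: scalar (1x1) versions of the Kalman gain and covariance
// update used in the filters below. Real filters work with matrices and a solver.
public class ScalarKalman {
    // K = P H' (H P H' + R)^-1, which for scalars is P*H / (H*P*H + R)
    public static double gain(double P, double H, double R) {
        double S = H * P * H + R; // innovation covariance S = H P H' + R
        return P * H / S;
    }

    // P = (I - K H) P = P - K H P
    public static double updateCovariance(double P, double H, double K) {
        return P - K * H * P;
    }
}
```

With P = 2, H = 1, R = 0.5 the gain is 2/2.5 = 0.8 and the updated covariance shrinks to 0.4, matching the intuition that a measurement reduces uncertainty.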
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DMatrixRMaj. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter{
// kinematics description
private SimpleMatrix F,Q,H;
// system state estimate
private SimpleMatrix x,P;
@Override public void configure(DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override public void setState(DMatrixRMaj x, DMatrixRMaj P) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override public void update(DMatrixRMaj _z, DMatrixRMaj _R) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
SimpleMatrix y = z.minus(H.mult(x));
// S = H P H' + R
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-kH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override public DMatrixRMaj getState() { return x.getMatrix(); }
@Override public DMatrixRMaj getCovariance() { return P.getMatrix(); }
}
</syntaxhighlight>
== Operations Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction seen in KalmanFilterSimple has been reduced. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter{
// kinematics description
private DMatrixRMaj F,Q,H;
// system state estimate
private DMatrixRMaj x,P;
// these are predeclared for efficiency reasons
private DMatrixRMaj a,b;
private DMatrixRMaj y,S,S_inv,c,d;
private DMatrixRMaj K;
private LinearSolverDense<DMatrixRMaj> solver;
@Override public void configure(DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DMatrixRMaj(dimenX,1);
b = new DMatrixRMaj(dimenX,dimenX);
y = new DMatrixRMaj(dimenZ,1);
S = new DMatrixRMaj(dimenZ,dimenZ);
S_inv = new DMatrixRMaj(dimenZ,dimenZ);
c = new DMatrixRMaj(dimenZ,dimenX);
d = new DMatrixRMaj(dimenX,dimenZ);
K = new DMatrixRMaj(dimenX,dimenZ);
x = new DMatrixRMaj(dimenX,1);
P = new DMatrixRMaj(dimenX,dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory_DDRM.symmPosDef(dimenX);
}
@Override public void setState(DMatrixRMaj x, DMatrixRMaj P) {
this.x.set(x);
this.P.set(P);
}
@Override public void predict() {
// x = F x
mult(F,x,a);
x.set(a);
// P = F P F' + Q
mult(F,P,b);
multTransB(b,F, P);
addEquals(P,Q);
}
@Override public void update(DMatrixRMaj z, DMatrixRMaj R) {
// y = z - H x
mult(H,x,y);
subtract(z, y, y);
// S = H P H' + R
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
// K = PH'S^(-1)
if( !solver.setA(S) ) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H,S_inv,d);
mult(P,d,K);
// x = x + Ky
mult(K,y,a);
addEquals(x,a);
// P = (I-kH)P = P - (KH)P = P-K(HP)
mult(H,P,c);
mult(K,c,b);
subtractEquals(P, b);
}
@Override public DMatrixRMaj getState() { return x; }
@Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter {
// system state estimate
private DMatrixRMaj x, P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX, predictP;
Sequence updateY, updateK, updateX, updateP;
@Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
int dimenX = F.numCols;
x = new DMatrixRMaj(dimenX, 1);
P = new DMatrixRMaj(dimenX, dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x, "x", P, "P", Q, "Q", F, "F", H, "H");
// Dummy matrix place holder to avoid compiler errors. Will be replaced later on
eq.alias(new DMatrixRMaj(1, 1), "z");
eq.alias(new DMatrixRMaj(1, 1), "R");
// Pre-compile so that it doesn't have to compile it each time it's invoked. More cumbersome
// but for small matrices the overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
this.x.set(x);
this.P.set(P);
}
@Override public void predict() {
predictX.perform();
predictP.perform();
}
@Override public void update( DMatrixRMaj z, DMatrixRMaj R ) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z, "z",R, "R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override public DMatrixRMaj getState() { return x; }
@Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
b63706ca64f4a8a86788bddea8bd3adc1edcadaf
Example Concurrent Operations
0
62
287
2020-11-05T05:15:49Z
Peter
1
Created page with "Concurrent or Mult Threaded operations are a relatively recent to EJML. EJML has traditionally been focused on single threaded performance but this recently changed when "low..."
wikitext
text/x-wiki
Concurrent or multi-threaded operations are a relatively recent addition to EJML. EJML has traditionally been focused on single-threaded performance, but this recently changed as the "low hanging fruit" was converted into threaded code. Most operations don't have threaded variants yet, and it is always possible to call code which is purely single threaded. See below for more details.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/ExampleConcurrent.java ExampleConcurrent.java code]
== Example Code ==
<syntaxhighlight lang="java">
/**
* Concurrent or multi-threaded algorithms are a recent addition to EJML. Classes with concurrent implementations
* can be identified with _MT_ in the class name. For example CommonOps_MT_DDRM will contain concurrent implementations
* of operations such as matrix multiplication for dense row-major algorithms. Not everything has a concurrent
* implementation yet and in some cases entirely new algorithms will need to be implemented.
*
* @author Peter Abeles
*/
public class ExampleConcurrent {
public static void main( String[] args ) {
// Create a few random matrices that we will multiply and decompose
var rand = new Random(0xBEEF);
DMatrixRMaj A = RandomMatrices_DDRM.rectangle(4000,4000,-1,1,rand);
DMatrixRMaj B = RandomMatrices_DDRM.rectangle(A.numCols,1000,-1,1,rand);
DMatrixRMaj C = new DMatrixRMaj(1,1);
// First do a concurrent matrix multiply using the default number of threads
System.out.println("Matrix Multiply, threads="+EjmlConcurrency.getMaxThreads());
long time0 = System.currentTimeMillis();
CommonOps_MT_DDRM.mult(A,B,C);
long time1 = System.currentTimeMillis();
System.out.println("Elapsed time "+(time1-time0)+" (ms)");
// Set it to two threads
EjmlConcurrency.setMaxThreads(2);
System.out.println("Matrix Multiply, threads="+EjmlConcurrency.getMaxThreads());
long time2 = System.currentTimeMillis();
CommonOps_MT_DDRM.mult(A,B,C);
long time3 = System.currentTimeMillis();
System.out.println("Elapsed time "+(time3-time2)+" (ms)");
// Then let's compare it against the single thread implementation
System.out.println("Matrix Multiply, Single Thread");
long time4 = System.currentTimeMillis();
CommonOps_DDRM.mult(A,B,C);
long time5 = System.currentTimeMillis();
System.out.println("Elapsed time "+(time5-time4)+" (ms)");
// Setting the number of threads to 1 and then running an MT implementation actually calls completely different
// code than the regular function calls and will be less efficient. This will probably only be evident on
// small matrices though
// In the future we will provide a way to optionally and automatically switch to concurrent implementations
// for larger matrices when calling standard functions.
}
}
</syntaxhighlight>
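The usual strategy such concurrent multiplies rely on can be sketched in plain Java: split the output rows into disjoint bands, one per task, so no synchronization is needed when writing the result. This is an illustration of the idea only, not EJML's internal scheduling code, and the class and parameter names are made up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration only: C = A*B (row-major flat arrays) with rows split into bands.
public class ConcurrentMultSketch {
    public static double[] mult(double[] A, double[] B, int m, int n, int p, int threads) {
        double[] C = new double[m * p];
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int band = (m + threads - 1) / threads; // rows per task, rounded up
        for (int t = 0; t < threads; t++) {
            final int r0 = t * band;
            final int r1 = Math.min(m, r0 + band);
            pool.execute(() -> {
                // this task writes only rows r0..r1-1 of C, so no locking is needed
                for (int i = r0; i < r1; i++)
                    for (int k = 0; k < n; k++)
                        for (int j = 0; j < p; j++)
                            C[i * p + j] += A[i * n + k] * B[k * p + j];
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for all bands to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return C;
    }
}
```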
e124d3ef64555c1736a9d00eff278440eef245b2
Example Graph Paths
0
63
288
2020-11-05T05:19:12Z
Peter
1
Created page with "Many Graph operations can be performed using linear algebra and this connection is the subject of much recent research. EJML now has basic "Graph BLAS" capabilities as this ex..."
wikitext
text/x-wiki
Many graph operations can be performed using linear algebra, and this connection is the subject of much recent research. EJML now has basic "Graph BLAS" capabilities, as this example shows.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/ExampleGraphPaths.java ExampleGraphPaths.java code]
== Example Code ==
<syntaxhighlight lang="java">
/**
* Example showing one iteration of the graph traversal algorithm breadth-first search (BFS)
* using different semirings, i.e. following the outgoing relationships of a set of starting nodes.
*
* More about the connection between graphs and linear algebra can be found at:
* https://github.com/GraphBLAS/GraphBLAS-Pointers.
*
* @author Florentin Doerre
*/
public class ExampleGraphPaths {
private static final int NODE_COUNT = 4;
public static void main(String[] args) {
DMatrixSparseCSC adjacencyMatrix = new DMatrixSparseCSC(NODE_COUNT, 4);
// For the example we will be using the following graph:
// (3)<-[cost: 0.2]-(0)<-[cost: 0.1]->(2)<-[cost: 0.3]-(1)
adjacencyMatrix.set(0, 2, 0.1);
adjacencyMatrix.set(0, 3, 0.2);
adjacencyMatrix.set(2, 0, 0.1);
adjacencyMatrix.set(3, 2, 0.3);
// Semirings are used to redefine + and *, e.g. with OR for + and AND for *
DSemiRing lor_land = DSemiRings.OR_AND;
DSemiRing min_times = DSemiRings.MIN_TIMES;
DSemiRing plus_land = new DSemiRing(DMonoids.PLUS, DMonoids.AND);
// sparse Vector (Matrix with one column)
DMatrixSparseCSC startNodes = new DMatrixSparseCSC(1, NODE_COUNT);
// setting the node 0 as the start-node
startNodes.set(0, 0, 1);
DMatrixSparseCSC outputVector = startNodes.createLike();
// Compute which nodes can be reached from the node 0 (disregarding the costs of the relationship)
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, lor_land, null, null);
System.out.println("Node 3 can be reached from node 0: " + (outputVector.get(0, 3) == 1));
System.out.println("Node 1 can be reached from node 0: " + (outputVector.get(0, 1) == 1));
// Add node 3 to the start nodes
startNodes.set(0, 3, 1);
// Find the number of paths by which each node can be reached
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, plus_land, null, null);
System.out.println("The number of start-nodes leading to node 2 is " + (int) outputVector.get(0, 2));
// Find the path with the minimal cost (direct connection from one of the specified starting nodes)
// the calculated cost equals the cost specified in the relationship (as both startNodes have a weight of 1)
// as an alternative you could use the MIN_PLUS semiring to consider the existing cost specified in the startNodes vector
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, min_times, null, null);
System.out.println("The minimal cost to reach the node 2 is " + outputVector.get(0, 2));
}
}
</syntaxhighlight>
bd292088424ba0721d03b899d285c9c78369f321
289
288
2020-11-05T05:19:56Z
Peter
1
wikitext
text/x-wiki
Many graph operations can be performed using linear algebra, and this connection is the subject of much recent research. EJML now has basic "GraphBLAS" capabilities, as this example shows.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/ExampleGraphPaths.java ExampleGraphPaths.java]
== Example Code ==
<syntaxhighlight lang="java">
/**
 * Example showing one iteration of the graph traversal algorithm breadth-first search (BFS)
 * under different semirings, i.e. following the outgoing relationships of a set of starting nodes.
*
* More about the connection between graphs and linear algebra can be found at:
* https://github.com/GraphBLAS/GraphBLAS-Pointers.
*
* @author Florentin Doerre
*/
public class ExampleGraphPaths {
    private static final int NODE_COUNT = 4;

    public static void main(String[] args) {
        DMatrixSparseCSC adjacencyMatrix = new DMatrixSparseCSC(NODE_COUNT, NODE_COUNT);
        // For the example we will be using the following directed graph:
        // 0 -> 2 (cost 0.1), 2 -> 0 (cost 0.1), 0 -> 3 (cost 0.2), 3 -> 2 (cost 0.3)
        adjacencyMatrix.set(0, 2, 0.1);
        adjacencyMatrix.set(0, 3, 0.2);
        adjacencyMatrix.set(2, 0, 0.1);
        adjacencyMatrix.set(3, 2, 0.3);

        // Semirings redefine + and *, e.g. with OR for + and AND for *
        DSemiRing lor_land = DSemiRings.OR_AND;
        DSemiRing min_times = DSemiRings.MIN_TIMES;
        DSemiRing plus_land = new DSemiRing(DMonoids.PLUS, DMonoids.AND);

        // Sparse vector (matrix with one row)
        DMatrixSparseCSC startNodes = new DMatrixSparseCSC(1, NODE_COUNT);
        // Set node 0 as the start node
        startNodes.set(0, 0, 1);
        DMatrixSparseCSC outputVector = startNodes.createLike();

        // Compute which nodes can be reached from node 0 (disregarding the costs of the relationships)
        CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, lor_land, null, null);
        System.out.println("Node 3 can be reached from node 0: " + (outputVector.get(0, 3) == 1));
        System.out.println("Node 1 can be reached from node 0: " + (outputVector.get(0, 1) == 1));

        // Add node 3 to the start nodes
        startNodes.set(0, 3, 1);
        // Find the number of paths by which each node can be reached
        CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, plus_land, null, null);
        System.out.println("The number of start-nodes leading to node 2 is " + (int) outputVector.get(0, 2));

        // Find the path with the minimal cost (direct connection from one of the specified start nodes).
        // The calculated cost equals the cost stored in the relationship, as both start nodes have a weight of 1.
        // As an alternative, the MIN_PLUS semiring would add the cost already stored in the startNodes vector.
        CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, min_times, null, null);
        System.out.println("The minimal cost to reach node 2 is " + outputVector.get(0, 2));
    }
}
</syntaxhighlight>
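The semiring idea can also be sketched in plain Java, independent of EJML: one BFS step is simply a vector-matrix product in which + has been replaced by OR and * by AND. The class below is a hypothetical illustration written for this page, not part of EJML.

```java
/**
 * Hypothetical plain-Java sketch (not part of EJML) of one BFS step as a
 * vector-matrix product over the boolean (OR, AND) semiring.
 */
public class BfsSemiringSketch {
    /** Combines frontier[i] AND adjacency[i][j] with OR over i: nodes reachable in one hop. */
    public static boolean[] step(boolean[] frontier, boolean[][] adjacency) {
        int n = frontier.length;
        boolean[] next = new boolean[n];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                next[j] = next[j] || (frontier[i] && adjacency[i][j]); // "+" = OR, "*" = AND
        return next;
    }

    public static void main(String[] args) {
        // Same edges as the example above: 0->2, 0->3, 2->0, 3->2
        boolean[][] adj = new boolean[4][4];
        adj[0][2] = true; adj[0][3] = true; adj[2][0] = true; adj[3][2] = true;
        boolean[] next = step(new boolean[]{true, false, false, false}, adj);
        System.out.println("One hop from node 0 reaches: node 2=" + next[2] + " node 3=" + next[3]);
    }
}
```

Swapping in (min, +) instead of (OR, AND) turns the same loop into one relaxation step of a shortest-path computation, which is exactly what the semiring argument to mult() selects in EJML.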
ff324146bc75960b7433434c7c3df88b7c35e873
Example Large Dense Matrices
0
64
290
2020-11-05T05:21:48Z
Peter
1
Created page with "Different approaches are required when writing high performance dense matrix operations for large matrices. For the most part, EJML will automatically switch to using these di..."
wikitext
text/x-wiki
Different approaches are required when writing high-performance dense matrix operations for large matrices. For the most part, EJML will automatically switch to these approaches. However, it can make sense to call them directly to minimize memory usage and avoid converting matrices.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/OptimizingLargeMatrixPerformance.java OptimizingLargeMatrixPerformance.java]
== Example Code ==
<syntaxhighlight lang="java">
/**
 * For many operations EJML provides block matrix support. These block, or tiled, matrices are designed to
 * reduce the number of cache misses, which can kill performance when working on large matrices. A critical
 * tuning parameter is the block size, and this is system specific. The example below shows how this
 * parameter can be optimized.
 *
 * @author Peter Abeles
 */
public class OptimizingLargeMatrixPerformance {
    public static void main( String[] args ) {
        // Create larger matrices to experiment with
        var rand = new Random(0xBEEF);
        DMatrixRMaj A = RandomMatrices_DDRM.rectangle(3000,3000,-1,1,rand);
        DMatrixRMaj B = A.copy();
        DMatrixRMaj C = A.createLike();

        // Since we are dealing with larger matrices let's use the concurrent implementation;
        // by default operations run in a single thread
        UtilEjml.printTime("Row-Major Multiplication:",()-> CommonOps_MT_DDRM.mult(A,B,C));

        // Converts A into a block matrix and creates a new matrix while leaving A unmodified
        DMatrixRBlock Ab = MatrixOps_DDRB.convert(A);
        // Converts B into a block matrix, but modifies its internal array in place. The returned block matrix
        // will share the same data array as the input. Much more memory efficient, but you need to be careful.
        DMatrixRBlock Bb = MatrixOps_DDRB.convertInplace(B,null,null);
        DMatrixRBlock Cb = Ab.createLike();
        UtilEjml.printTime("Block Multiplication:    ",()-> MatrixOps_MT_DDRB.mult(Ab,Bb,Cb));

        // Can we make this faster? Probably, by adjusting the block size. This is system dependent, so let's
        // try a range of values
        int defaultBlockWidth = EjmlParameters.BLOCK_WIDTH;
        System.out.println("Default Block Size: "+defaultBlockWidth);
        for ( int block : new int[]{10,20,30,50,70,100,140,200,500}) {
            EjmlParameters.BLOCK_WIDTH = block;
            // Need to create the block matrices again since we changed the block size
            DMatrixRBlock Ac = MatrixOps_DDRB.convert(A);
            DMatrixRBlock Bc = MatrixOps_DDRB.convert(B);
            DMatrixRBlock Cc = Ac.createLike();
            UtilEjml.printTime("Block "+EjmlParameters.BLOCK_WIDTH+": ",()-> MatrixOps_MT_DDRB.mult(Ac,Bc,Cc));
        }

        // On my system the optimal block size is around 100, with an improvement of about 5%.
        // On some architectures the improvement can be substantial; on others the default value is very reasonable.
        // Some decompositions will switch to a block format automatically, and matrix multiplication and other
        // operations might in the future too. The main reason this hasn't happened yet is that, to be memory
        // efficient, it would need to modify and then undo the modification of the input matrices, which would
        // be very confusing if you're writing concurrent code.
    }
}
</syntaxhighlight>
b6ab80657c4d1dc32cc771a9f245ff6a83104f80
291
290
2020-11-05T05:22:48Z
Peter
1
wikitext
text/x-wiki
Different approaches are required when writing high-performance dense matrix operations for large matrices. For the most part, EJML will automatically switch to these approaches. A key parameter that needs to be tuned for a specific system is the block size. It can also make sense to work directly with block matrices instead of assuming EJML picks the best approach for your system.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.40/examples/src/org/ejml/example/OptimizingLargeMatrixPerformance.java OptimizingLargeMatrixPerformance.java]
== Example Code ==
<syntaxhighlight lang="java">
/**
 * For many operations EJML provides block matrix support. These block, or tiled, matrices are designed to
 * reduce the number of cache misses, which can kill performance when working on large matrices. A critical
 * tuning parameter is the block size, and this is system specific. The example below shows how this
 * parameter can be optimized.
 *
 * @author Peter Abeles
 */
public class OptimizingLargeMatrixPerformance {
    public static void main( String[] args ) {
        // Create larger matrices to experiment with
        var rand = new Random(0xBEEF);
        DMatrixRMaj A = RandomMatrices_DDRM.rectangle(3000,3000,-1,1,rand);
        DMatrixRMaj B = A.copy();
        DMatrixRMaj C = A.createLike();

        // Since we are dealing with larger matrices let's use the concurrent implementation;
        // by default operations run in a single thread
        UtilEjml.printTime("Row-Major Multiplication:",()-> CommonOps_MT_DDRM.mult(A,B,C));

        // Converts A into a block matrix and creates a new matrix while leaving A unmodified
        DMatrixRBlock Ab = MatrixOps_DDRB.convert(A);
        // Converts B into a block matrix, but modifies its internal array in place. The returned block matrix
        // will share the same data array as the input. Much more memory efficient, but you need to be careful.
        DMatrixRBlock Bb = MatrixOps_DDRB.convertInplace(B,null,null);
        DMatrixRBlock Cb = Ab.createLike();
        UtilEjml.printTime("Block Multiplication:    ",()-> MatrixOps_MT_DDRB.mult(Ab,Bb,Cb));

        // Can we make this faster? Probably, by adjusting the block size. This is system dependent, so let's
        // try a range of values
        int defaultBlockWidth = EjmlParameters.BLOCK_WIDTH;
        System.out.println("Default Block Size: "+defaultBlockWidth);
        for ( int block : new int[]{10,20,30,50,70,100,140,200,500}) {
            EjmlParameters.BLOCK_WIDTH = block;
            // Need to create the block matrices again since we changed the block size
            DMatrixRBlock Ac = MatrixOps_DDRB.convert(A);
            DMatrixRBlock Bc = MatrixOps_DDRB.convert(B);
            DMatrixRBlock Cc = Ac.createLike();
            UtilEjml.printTime("Block "+EjmlParameters.BLOCK_WIDTH+": ",()-> MatrixOps_MT_DDRB.mult(Ac,Bc,Cc));
        }

        // On my system the optimal block size is around 100, with an improvement of about 5%.
        // On some architectures the improvement can be substantial; on others the default value is very reasonable.
        // Some decompositions will switch to a block format automatically, and matrix multiplication and other
        // operations might in the future too. The main reason this hasn't happened yet is that, to be memory
        // efficient, it would need to modify and then undo the modification of the input matrices, which would
        // be very confusing if you're writing concurrent code.
    }
}
</syntaxhighlight>
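To see why the block size matters, it helps to look at what a tiled multiply actually does. Below is a hypothetical plain-Java sketch written for this page (EJML's real implementation additionally stores each tile contiguously in a single array): within a tile the working set is small enough to stay in cache, so each loaded value is reused many times before being evicted.

```java
/**
 * Hypothetical sketch (not EJML code) of a blocked matrix multiply.
 */
public class BlockedMultSketch {
    /** C = A*B for square matrices, processed in block x block tiles. */
    public static double[][] mult(double[][] A, double[][] B, int block) {
        int n = A.length;
        double[][] C = new double[n][n];
        for (int i0 = 0; i0 < n; i0 += block)
            for (int k0 = 0; k0 < n; k0 += block)
                for (int j0 = 0; j0 < n; j0 += block)
                    // All accesses below touch only three block x block tiles
                    for (int i = i0; i < Math.min(i0 + block, n); i++)
                        for (int k = k0; k < Math.min(k0 + block, n); k++) {
                            double a = A[i][k];
                            for (int j = j0; j < Math.min(j0 + block, n); j++)
                                C[i][j] += a * B[k][j];
                        }
        return C;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{5, 6}, {7, 8}};
        double[][] C = mult(A, B, 1);
        System.out.println(C[0][0] + " " + C[0][1] + " " + C[1][0] + " " + C[1][1]);
        // prints 19.0 22.0 43.0 50.0
    }
}
```

The result is identical for any block size; only the memory-access pattern, and therefore the speed on large matrices, changes, which is what the tuning loop in the example above measures.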
b725f52561e0f4ea0a09c1cc1c013a35ad3dbefb
Users
0
3
295
12
2020-12-23T11:50:28Z
Peter
1
wikitext
text/x-wiki
= Projects which use EJML =
Feel free to add your own project!
* [https://www.db.bme.hu/preprints/thesis2018-multidimensional-graph-analysis.pdf Petra Várhegyi's masters thesis on graph analysis]
* [http://wiki.industrial-craft.net Industrial Craft 2]: a modification for Minecraft
* [http://www-lium.univ-lemans.fr/diarization/doku.php/ LIUM_SpkDiarization] is software dedicated to speaker diarization (i.e. speaker segmentation and clustering).
* [http://researchers.lille.inria.fr/~freno/JProGraM.html JProGraM]: Library for learning a number of statistical models from data.
* [http://code.google.com/p/gogps/ goGPS]: Improves the positioning accuracy of low-cost GPS devices via the RTK technique.
* [http://www-edc.eng.cam.ac.uk/tools/set_visualiser/ Set Visualiser]: Visualises the way that a number of items is classified into one or more categories or sets using Euler diagrams.
* Universal Java Matrix Package (UJMP): http://www.ujmp.org/
* Scalalab: http://code.google.com/p/scalalab/
* Java Content Based Image Retrieval (JCBIR): http://code.google.com/p/jcbir/
* JLabGroovy: http://code.google.com/p/jlabgroovy/
* JquantLib (Will be added): http://www.jquantlib.org/
* Matlube: https://github.com/hohonuuli/matlube
* Geometric Regression Library: http://georegression.org/
* BoofCV: Computer Vision Library: http://boofcv.org/
* ICY: bio-imaging: http://www.bioimageanalysis.com/icy/
* JSkills: Java implementation of TrueSkill algorithm https://github.com/nsp/JSkills
* Portfolio applets at http://www.christoph-junge.de/optimizer.php
* Distributed Control Framework (DCF) http://www.i-a-i.com/dcfpro/
* JptView point cloud viewer: http://www.seas.upenn.edu/~aiv/jptview/
* JPrIME Bayesian phylogenetics library: http://code.google.com/p/jprime/
* J-Matrix quantum mechanics scattering https://code.google.com/p/jmatrix/
* DDogleg Numerics: http://ddogleg.org
* Saddle: http://saddle.github.io/doc/index.html
* GDSC ImageJ Plugins: http://www.sussex.ac.uk/gdsc/intranet/microscopy/imagej/gdsc_plugins
* Robot Controller for Humanoid Robots: http://www.ihmc.us/Research/projects/HumanoidRobots/index.html
* Credit Analytics: http://code.google.com/p/creditanalytics
* Spline Library: http://code.google.com/p/splinelibrary - http://www.credit-trader.org/CreditSuite/docs/SplineLibrary_2.2.pdf
* Fixed Point Finder: http://code.google.com/p/rootfinder - http://www.credit-trader.org/CreditSuite/docs/FixedPointFinder_2.2.pdf
* Sensitivity generation scheme in Credit Analytics: http://www.credit-trader.org/CreditSuite/docs/SensitivityGenerator_2.2.pdf
* Stanford CoreNLP: A set of natural language analysis tools: http://nlp.stanford.edu/software/corenlp.shtml
* OpenChrom: Open source software for the mass spectrometric analysis of chromatographic data. https://www.openchrom.net
= Papers That Cite EJML =
* Zewdie, Dawit Habtamu. "Representation discovery in non-parametric reinforcement learning." Diss. Massachusetts Institute of Technology, 2014.
* Sanfilippo, Filippo, et al. "A mapping approach for controlling different maritime cranes and robots using ANN." Mechatronics and Automation (ICMA), 2014 IEEE International Conference on. IEEE, 2014.
* Kushman, Nate, et al. "Learning to automatically solve algebra word problems." ACL (1) (2014): 271-281.
* Stergios Papadimitriou, Seferina Mavroudi, Kostas Theofilatos, and Spiridon Likothanasis, “MATLAB-Like Scripting of Java Scientific Libraries in ScalaLab,” Scientific Programming, vol. 22, no. 3, pp. 187-199, 2014.
* Alberto Castellini, Daniele Paltrinieri, and Vincenzo Manca "MP-GeneticSynth: Inferring Biological Network Regulations from Time Series" Bioinformatics 2014
* Blasinski, H., Bulan, O., & Sharma, G. (2013). Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework.
* Marin, R. C., & Dobre, C. (2013, November). Reaching for the clouds: contextually enhancing smartphones for energy efficiency. In Proceedings of the 2nd ACM workshop on High performance mobile opportunistic systems (pp. 31-38). ACM.
* Oletic, D., Skrapec, M., & Bilas, V. (2013). Monitoring Respiratory Sounds: Compressed Sensing Reconstruction via OMP on Android Smartphone. In Wireless Mobile Communication and Healthcare (pp. 114-121). Springer Berlin Heidelberg.
* Santhiar, Anirudh and Pandita, Omesh and Kanade, Aditya "Discovering Math APIs by Mining Unit Tests" Fundamental Approaches to Software Engineering 2013
* Sanjay K. Boddhu, Robert L. Williams, Edward Wasser, Niranjan Kode, "Increasing Situational Awareness using Smartphones" Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83891J (May 1, 2012)
* J. A. Álvarez-Bermejo, N. Antequera, R. García-Rubio and J. A. López-Ramos, ''"A scalable server for key distribution and its application to accounting,"'' The Journal of Supercomputing, 2012
* Realini E., Yoshida D., Reguzzoni M., Raghavan V., ''"Enhanced satellite positioning as a web service with goGPS open source software"''. Applied Geomatics 4(2), 135-142. 2012
* Stergios Papadimitriou, Constantinos Terzidis, Seferina Mavroudi, Spiridon D. Likothanassis: ''Exploiting java scientific libraries with the scala language within the scalalab environment.'' IET Software 5(6): 543-551 (2011)
* L. T. Lim, B. Ranaivo-Malançon and E. K. Tang. ''“Symbiosis Between a Multilingual Lexicon and Translation Example Banks”.'' In: Procedia: Social and Behavioral Sciences 27 (2011), pp. 61–69.
* G. Taboada, S. Ramos, R. Expósito, J. Touriño, R. Doallo, ''Java in the High Performance Computing arena: Research, practice and experience,'' Science of Computer Programming, 2011.
* http://geomatica.como.polimi.it/presentazioni/Osaka_Summer_goGPS.pdf
* http://www.holger-arndt.de/library/MLOSS2010.pdf
* http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf
Note: Slowly working on an EJML paper for publication; about halfway through a first draft.
= On The Web =
* https://softwarerecs.stackexchange.com/questions/51330/sparse-matrix-library-for-java
* https://lessthanoptimal.github.io/Java-Matrix-Benchmark/
* http://java.dzone.com/announcements/introduction-efficient-java
* https://shakthydoss.wordpress.com/2011/01/13/jama-shortcoming/
* Various questions on stackoverflow.com
4c2caa12a66aaf4616b4e49303338e1edf6a1735
296
295
2020-12-23T11:54:25Z
Peter
1
wikitext
text/x-wiki
= Projects which use EJML =
Feel free to add your own project!
* A ton of [https://scholar.google.com/scholar?q=%22efficient+java+matrix+library%22&hl=en&as_sdt=0,5 academic papers]
* [https://www.db.bme.hu/preprints/thesis2018-multidimensional-graph-analysis.pdf Petra Várhegyi's masters thesis on graph analysis]
* [http://wiki.industrial-craft.net Industrial Craft 2]: a modification for Minecraft
* [http://www-lium.univ-lemans.fr/diarization/doku.php/ LIUM_SpkDiarization] is software dedicated to speaker diarization (i.e. speaker segmentation and clustering).
* [http://researchers.lille.inria.fr/~freno/JProGraM.html JProGraM]: Library for learning a number of statistical models from data.
* [http://code.google.com/p/gogps/ goGPS]: Improves the positioning accuracy of low-cost GPS devices via the RTK technique.
* [http://www-edc.eng.cam.ac.uk/tools/set_visualiser/ Set Visualiser]: Visualises the way that a number of items is classified into one or more categories or sets using Euler diagrams.
* Universal Java Matrix Package (UJMP): http://www.ujmp.org/
* Scalalab: http://code.google.com/p/scalalab/
* Java Content Based Image Retrieval (JCBIR): http://code.google.com/p/jcbir/
* JLabGroovy: http://code.google.com/p/jlabgroovy/
* JquantLib (Will be added): http://www.jquantlib.org/
* Matlube: https://github.com/hohonuuli/matlube
* Geometric Regression Library: http://georegression.org/
* BoofCV: Computer Vision Library: http://boofcv.org/
* ICY: bio-imaging: http://www.bioimageanalysis.com/icy/
* JSkills: Java implementation of TrueSkill algorithm https://github.com/nsp/JSkills
* Portfolio applets at http://www.christoph-junge.de/optimizer.php
* Distributed Control Framework (DCF) http://www.i-a-i.com/dcfpro/
* JptView point cloud viewer: http://www.seas.upenn.edu/~aiv/jptview/
* JPrIME Bayesian phylogenetics library: http://code.google.com/p/jprime/
* J-Matrix quantum mechanics scattering https://code.google.com/p/jmatrix/
* DDogleg Numerics: http://ddogleg.org
* Saddle: http://saddle.github.io/doc/index.html
* GDSC ImageJ Plugins: http://www.sussex.ac.uk/gdsc/intranet/microscopy/imagej/gdsc_plugins
* Robot Controller for Humanoid Robots: http://www.ihmc.us/Research/projects/HumanoidRobots/index.html
* Credit Analytics: http://code.google.com/p/creditanalytics
* Spline Library: http://code.google.com/p/splinelibrary - http://www.credit-trader.org/CreditSuite/docs/SplineLibrary_2.2.pdf
* Fixed Point Finder: http://code.google.com/p/rootfinder - http://www.credit-trader.org/CreditSuite/docs/FixedPointFinder_2.2.pdf
* Sensitivity generation scheme in Credit Analytics: http://www.credit-trader.org/CreditSuite/docs/SensitivityGenerator_2.2.pdf
* Stanford CoreNLP: A set of natural language analysis tools: http://nlp.stanford.edu/software/corenlp.shtml
* OpenChrom: Open source software for the mass spectrometric analysis of chromatographic data. https://www.openchrom.net
= Papers That Cite EJML =
* Zewdie, Dawit Habtamu. "Representation discovery in non-parametric reinforcement learning." Diss. Massachusetts Institute of Technology, 2014.
* Sanfilippo, Filippo, et al. "A mapping approach for controlling different maritime cranes and robots using ANN." Mechatronics and Automation (ICMA), 2014 IEEE International Conference on. IEEE, 2014.
* Kushman, Nate, et al. "Learning to automatically solve algebra word problems." ACL (1) (2014): 271-281.
* Stergios Papadimitriou, Seferina Mavroudi, Kostas Theofilatos, and Spiridon Likothanasis, “MATLAB-Like Scripting of Java Scientific Libraries in ScalaLab,” Scientific Programming, vol. 22, no. 3, pp. 187-199, 2014.
* Alberto Castellini, Daniele Paltrinieri, and Vincenzo Manca "MP-GeneticSynth: Inferring Biological Network Regulations from Time Series" Bioinformatics 2014
* Blasinski, H., Bulan, O., & Sharma, G. (2013). Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework.
* Marin, R. C., & Dobre, C. (2013, November). Reaching for the clouds: contextually enhancing smartphones for energy efficiency. In Proceedings of the 2nd ACM workshop on High performance mobile opportunistic systems (pp. 31-38). ACM.
* Oletic, D., Skrapec, M., & Bilas, V. (2013). Monitoring Respiratory Sounds: Compressed Sensing Reconstruction via OMP on Android Smartphone. In Wireless Mobile Communication and Healthcare (pp. 114-121). Springer Berlin Heidelberg.
* Santhiar, Anirudh and Pandita, Omesh and Kanade, Aditya "Discovering Math APIs by Mining Unit Tests" Fundamental Approaches to Software Engineering 2013
* Sanjay K. Boddhu, Robert L. Williams, Edward Wasser, Niranjan Kode, "Increasing Situational Awareness using Smartphones" Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83891J (May 1, 2012)
* J. A. Álvarez-Bermejo, N. Antequera, R. García-Rubio and J. A. López-Ramos, ''"A scalable server for key distribution and its application to accounting,"'' The Journal of Supercomputing, 2012
* Realini E., Yoshida D., Reguzzoni M., Raghavan V., ''"Enhanced satellite positioning as a web service with goGPS open source software"''. Applied Geomatics 4(2), 135-142. 2012
* Stergios Papadimitriou, Constantinos Terzidis, Seferina Mavroudi, Spiridon D. Likothanassis: ''Exploiting java scientific libraries with the scala language within the scalalab environment.'' IET Software 5(6): 543-551 (2011)
* L. T. Lim, B. Ranaivo-Malançon and E. K. Tang. ''“Symbiosis Between a Multilingual Lexicon and Translation Example Banks”.'' In: Procedia: Social and Behavioral Sciences 27 (2011), pp. 61–69.
* G. Taboada, S. Ramos, R. Expósito, J. Touriño, R. Doallo, ''Java in the High Performance Computing arena: Research, practice and experience,'' Science of Computer Programming, 2011.
* http://geomatica.como.polimi.it/presentazioni/Osaka_Summer_goGPS.pdf
* http://www.holger-arndt.de/library/MLOSS2010.pdf
* http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf
Note: Slowly working on an EJML paper for publication; about halfway through a first draft.
= On The Web =
* https://softwarerecs.stackexchange.com/questions/51330/sparse-matrix-library-for-java
* https://lessthanoptimal.github.io/Java-Matrix-Benchmark/
* http://java.dzone.com/announcements/introduction-efficient-java
* https://shakthydoss.wordpress.com/2011/01/13/jama-shortcoming/
* Various questions on stackoverflow.com
3cccfb3bd19cc79da40d288cd778849818d7134f
297
296
2021-01-05T03:55:24Z
Peter
1
wikitext
text/x-wiki
= Projects which use EJML =
Feel free to add your own project!
* [https://neo4j.com/ Neo4J]'s graph-data-science library.
* [https://www.db.bme.hu/preprints/thesis2018-multidimensional-graph-analysis.pdf Petra Várhegyi's masters thesis on graph analysis]
* [http://wiki.industrial-craft.net Industrial Craft 2]: a modification for Minecraft
* [http://www-lium.univ-lemans.fr/diarization/doku.php/ LIUM_SpkDiarization] is software dedicated to speaker diarization (i.e. speaker segmentation and clustering).
* [http://researchers.lille.inria.fr/~freno/JProGraM.html JProGraM]: Library for learning a number of statistical models from data.
* [http://code.google.com/p/gogps/ goGPS]: Improves the positioning accuracy of low-cost GPS devices via the RTK technique.
* [http://www-edc.eng.cam.ac.uk/tools/set_visualiser/ Set Visualiser]: Visualises the way that a number of items is classified into one or more categories or sets using Euler diagrams.
* Universal Java Matrix Package (UJMP): http://www.ujmp.org/
* Scalalab: http://code.google.com/p/scalalab/
* Java Content Based Image Retrieval (JCBIR): http://code.google.com/p/jcbir/
* JLabGroovy: http://code.google.com/p/jlabgroovy/
* JquantLib (Will be added): http://www.jquantlib.org/
* Matlube: https://github.com/hohonuuli/matlube
* Geometric Regression Library: http://georegression.org/
* BoofCV: Computer Vision Library: http://boofcv.org/
* ICY: bio-imaging: http://www.bioimageanalysis.com/icy/
* JSkills: Java implementation of TrueSkill algorithm https://github.com/nsp/JSkills
* Portfolio applets at http://www.christoph-junge.de/optimizer.php
* Distributed Control Framework (DCF) http://www.i-a-i.com/dcfpro/
* JptView point cloud viewer: http://www.seas.upenn.edu/~aiv/jptview/
* JPrIME Bayesian phylogenetics library: http://code.google.com/p/jprime/
* J-Matrix quantum mechanics scattering https://code.google.com/p/jmatrix/
* DDogleg Numerics: http://ddogleg.org
* Saddle: http://saddle.github.io/doc/index.html
* GDSC ImageJ Plugins: http://www.sussex.ac.uk/gdsc/intranet/microscopy/imagej/gdsc_plugins
* Robot Controller for Humanoid Robots: http://www.ihmc.us/Research/projects/HumanoidRobots/index.html
* Credit Analytics: http://code.google.com/p/creditanalytics
* Spline Library: http://code.google.com/p/splinelibrary - http://www.credit-trader.org/CreditSuite/docs/SplineLibrary_2.2.pdf
* Fixed Point Finder: http://code.google.com/p/rootfinder - http://www.credit-trader.org/CreditSuite/docs/FixedPointFinder_2.2.pdf
* Sensitivity generation scheme in Credit Analytics: http://www.credit-trader.org/CreditSuite/docs/SensitivityGenerator_2.2.pdf
* Stanford CoreNLP: A set of natural language analysis tools: http://nlp.stanford.edu/software/corenlp.shtml
* OpenChrom: Open source software for the mass spectrometric analysis of chromatographic data. https://www.openchrom.net
= Papers That Cite EJML =
* A ton of [https://scholar.google.com/scholar?q=%22efficient+java+matrix+library%22&hl=en&as_sdt=0,5 academic papers]
* Zewdie, Dawit Habtamu. "Representation discovery in non-parametric reinforcement learning." Diss. Massachusetts Institute of Technology, 2014.
* Sanfilippo, Filippo, et al. "A mapping approach for controlling different maritime cranes and robots using ANN." Mechatronics and Automation (ICMA), 2014 IEEE International Conference on. IEEE, 2014.
* Kushman, Nate, et al. "Learning to automatically solve algebra word problems." ACL (1) (2014): 271-281.
* Stergios Papadimitriou, Seferina Mavroudi, Kostas Theofilatos, and Spiridon Likothanasis, “MATLAB-Like Scripting of Java Scientific Libraries in ScalaLab,” Scientific Programming, vol. 22, no. 3, pp. 187-199, 2014.
* Alberto Castellini, Daniele Paltrinieri, and Vincenzo Manca "MP-GeneticSynth: Inferring Biological Network Regulations from Time Series" Bioinformatics 2014
* Blasinski, H., Bulan, O., & Sharma, G. (2013). Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework.
* Marin, R. C., & Dobre, C. (2013, November). Reaching for the clouds: contextually enhancing smartphones for energy efficiency. In Proceedings of the 2nd ACM workshop on High performance mobile opportunistic systems (pp. 31-38). ACM.
* Oletic, D., Skrapec, M., & Bilas, V. (2013). Monitoring Respiratory Sounds: Compressed Sensing Reconstruction via OMP on Android Smartphone. In Wireless Mobile Communication and Healthcare (pp. 114-121). Springer Berlin Heidelberg.
* Santhiar, Anirudh and Pandita, Omesh and Kanade, Aditya "Discovering Math APIs by Mining Unit Tests" Fundamental Approaches to Software Engineering 2013
* Sanjay K. Boddhu, Robert L. Williams, Edward Wasser, Niranjan Kode, "Increasing Situational Awareness using Smartphones" Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83891J (May 1, 2012)
* J. A. Álvarez-Bermejo, N. Antequera, R. García-Rubio and J. A. López-Ramos, ''"A scalable server for key distribution and its application to accounting,"'' The Journal of Supercomputing, 2012
* Realini E., Yoshida D., Reguzzoni M., Raghavan V., ''"Enhanced satellite positioning as a web service with goGPS open source software"''. Applied Geomatics 4(2), 135-142. 2012
* Stergios Papadimitriou, Constantinos Terzidis, Seferina Mavroudi, Spiridon D. Likothanassis: ''Exploiting java scientific libraries with the scala language within the scalalab environment.'' IET Software 5(6): 543-551 (2011)
* L. T. Lim, B. Ranaivo-Malançon and E. K. Tang. ''“Symbiosis Between a Multilingual Lexicon and Translation Example Banks”.'' In: Procedia: Social and Behavioral Sciences 27 (2011), pp. 61–69.
* G. Taboada, S. Ramos, R. Expósito, J. Touriño, R. Doallo, ''Java in the High Performance Computing arena: Research, practice and experience,'' Science of Computer Programming, 2011.
* http://geomatica.como.polimi.it/presentazioni/Osaka_Summer_goGPS.pdf
* http://www.holger-arndt.de/library/MLOSS2010.pdf
* http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf
Note: Slowly working on an EJML paper for publication; about halfway through a first draft.
= On The Web =
* https://softwarerecs.stackexchange.com/questions/51330/sparse-matrix-library-for-java
* https://lessthanoptimal.github.io/Java-Matrix-Benchmark/
* http://java.dzone.com/announcements/introduction-efficient-java
* https://shakthydoss.wordpress.com/2011/01/13/jama-shortcoming/
* Various questions on stackoverflow.com
17dd4ec519c4d0c34722eb64fac35bc2b62af44a
Capabilities
0
33
298
90
2021-01-23T16:04:40Z
Peter
1
/* Linear Algebra Capabilities */
wikitext
text/x-wiki
= Linear Algebra Capabilities =
{| class="wikitable"
! !! Dense Real !! Fixed Real !! Dense Complex !! Sparse Real
|-
| Basic Arithmetic || X || X || X || X
|-
| Element-Wise Ops || X || X || X || X
|-
| Transpose || X || X || X || X
|-
| Determinant || X || X || X || X
|-
| Norm || X || || X || X
|-
| Inverse || X || X || X || X
|-
| Solve m=n || X || || X || X
|-
| Solve m>n || X || || X || X
|-
| LU || X || || X || X
|-
| Cholesky || X || || X || X
|-
| QR || X || || X || X
|-
| QRP || X || || ||
|-
| SVD || X || || ||
|-
| Eigen Symm || X || || ||
|-
| Eigen General || X || || ||
|}
The table above summarizes, at a high level, the capabilities available for each matrix type. For a complete list of features, check out the following classes and factories. Note that the capabilities also vary with the interface you use; see the interface-specific documentation for details. The procedural interface supports everything.
{| class="wikitable"
! Dense Real !! Fixed Real !! Dense Complex !! Sparse Real
|-
| {{OpsDocLink|CommonOps}} || {{DocLink|org/ejml/alg/fixed/FixedOps3.html|FixedOps}} || {{OpsDocLink|CCommonOps}} ||
|-
| {{OpsDocLink|EigenOps}} || ||
|-
| {{OpsDocLink|MatrixFeatures}} || || {{OpsDocLink|CMatrixFeatures}}
|-
| {{OpsDocLink|MatrixVisualization}} || ||
|-
| {{OpsDocLink|NormOps}} || || {{OpsDocLink|CNormOps}}
|-
| {{OpsDocLink|RandomMatrices}} || || {{OpsDocLink|CRandomMatrices}}
|-
| {{OpsDocLink|SingularOps}} || ||
|-
| {{OpsDocLink|SpecializedOps}} || || {{OpsDocLink|CSpecializedOps}}
|}
= Other Features =
* File IO
* Visualization
7f6e63bd5bd235eaa97ed969ad9aabd5f8034e76
304
298
2021-01-23T16:33:26Z
Peter
1
wikitext
text/x-wiki
= Linear Algebra Capabilities =
{| class="wikitable"
! !! Dense Real !! Fixed Real !! Dense Complex !! Sparse Real
|-
| Basic Arithmetic || X || X || X || X
|-
| Element-Wise Ops || X || X || X || X
|-
| Transpose || X || X || X || X
|-
| Determinant || X || X || X || X
|-
| Norm || X || || X || X
|-
| Inverse || X || X || X || X
|-
| Solve m=n || X || || X || X
|-
| Solve m>n || X || || X || X
|-
| LU || X || || X || X
|-
| Cholesky || X || || X || X
|-
| QR || X || || X || X
|-
| QRP || X || || ||
|-
| SVD || X || || ||
|-
| Eigen Symm || X || || ||
|-
| Eigen General || X || || ||
|}
The table above summarizes, at a high level, the capabilities available for each matrix type. For a complete list of features, check out the following classes and factories. Note that capabilities also vary by interface; see the interface-specific documentation for details. The procedural interface supports everything.
{| class="wikitable"
! Ops !! Dense Real !! Fixed Real !! Dense Complex !! Sparse Real
|-
| CommonOps
| {{OpsDocLink|row/CommonOps_DDRM|X}} || {{OpsDocLink|fixed/CommonOps_DDF3|X}} || {{OpsDocLink|row/CommonOps_ZDRM|X}} || {{SparseLink|csc/CommonOps_DSCC|X}}
|-
| EigenOps
| {{OpsDocLink|row/EigenOps_DDRM|X}} || || ||
|-
| MatrixFeatures
| {{OpsDocLink|row/MatrixFeatures_DDRM|X}} || || {{OpsDocLink|row/MatrixFeatures_ZDRM|X}} || {{SparseLink|csc/MatrixFeatures_DSCC|X}}
|-
| MatrixVisualization
| {{OpsDocLink|row/MatrixVisualization_DDRM|X}} || || ||
|-
| NormOps
| {{OpsDocLink|row/NormOps_DDRM|X}} || || {{OpsDocLink|row/NormOps_ZDRM|X}} || {{SparseLink|csc/NormOps_DSCC|X}}
|-
| RandomMatrices
| {{OpsDocLink|row/RandomMatrices_DDRM|X}} || || {{OpsDocLink|row/RandomMatrices_ZDRM|X}} || {{SparseLink|csc/RandomMatrices_DSCC|X}}
|-
| SingularOps
| {{OpsDocLink|row/SingularOps_DDRM|X}} || || ||
|-
| SpecializedOps
| {{OpsDocLink|row/SpecializedOps_DDRM|X}} || || {{OpsDocLink|row/SpecializedOps_ZDRM|X}} ||
|}
= Other Features =
File IO
Visualization
04c7510a49672e1c375519b7eb46ce19430b1e07
Template:OpsDocLink
10
32
299
210
2021-01-23T16:11:04Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/{{{2}}}.html|{{{1}}} }}
db7ff1c935b247217943ededc627122c1b2533c1
300
299
2021-01-23T16:14:07Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/{{{1}}}.html|{{{2}}} }}
b17cfeaf05aba222fb8b56cacc9215ddab2ded81
301
300
2021-01-23T16:18:00Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/dense/row/{{{1}}}.html|{{{2}}} }}
eafb916e71793e5ff5b67dbe550b8c26d0a53fac
302
301
2021-01-23T16:20:05Z
Peter
1
wikitext
text/x-wiki
{{DocLink|org/ejml/dense/{{{1}}}.html|{{{2}}} }}
2f087261ea32619df001760477890200535bcc16
Template:SparseLink
10
65
303
2021-01-23T16:22:40Z
Peter
1
Created page with "{{DocLink|org/ejml/sparse/{{{1}}}.html|{{{2}}} }}"
wikitext
text/x-wiki
{{DocLink|org/ejml/sparse/{{{1}}}.html|{{{2}}} }}
b48a4b20f7bacf0af57df2d08a48ca6b073ce251
Input and Output
0
23
305
228
2021-02-18T02:36:55Z
Peter
1
wikitext
text/x-wiki
EJML provides several different methods for loading, saving, and displaying a matrix. A matrix can be saved to and loaded from a file, displayed visually in a window, printed to the console, or created from raw arrays or strings.
__TOC__
= Text Output =
A matrix can be printed to standard out using its built-in ''print()'' method; this works for both DMatrixRMaj and SimpleMatrix. To customize the output, provide a format string compatible with printf().
Code:
<syntaxhighlight lang="java">
public static void main( String[] args ) {
    DMatrixRMaj A = new DMatrixRMaj(2,3,true,1.1,2.34,3.35436,4345,59505,0.00001234);

    A.print();
    System.out.println();
    A.print("%e");
    System.out.println();
    A.print("%10.2f");
}
</syntaxhighlight>
Output:
<pre>
Type = dense real , numRows = 2 , numCols = 3
1.100 2.340 3.354
4345.000 59505.000 0.000
Type = dense real , numRows = 2 , numCols = 3
1.100000e+00 2.340000e+00 3.354360e+00
4.345000e+03 5.950500e+04 1.234000e-05
Type = dense real , numRows = 2 , numCols = 3
1.10 2.34 3.35
4345.00 59505.00 0.00
</pre>
= CSV Input/Output =
A Comma Separated Value (CSV) reader and writer is provided by EJML. The advantage of this file format is that it's human readable; the disadvantage is that it's large and slow. Two CSV formats are supported: one where the first line specifies the matrix dimensions, and one where the user specifies them programmatically.
In the example below, the matrix size and type are specified in the first line: rows, columns, and real/complex. The remainder of the file contains the value of each element of the matrix in row-major order. A file containing
<pre>
2 3 real
2.4 6.7 9
-2 3 5
</pre>
would describe a real matrix with 2 rows and 3 columns.
DMatrixRMaj Example:
<syntaxhighlight lang="java">
public static void main( String[] args ) {
    DMatrixRMaj A = new DMatrixRMaj(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        MatrixIO.saveCSV(A, "matrix_file.csv");
        DMatrixRMaj B = MatrixIO.loadCSV("matrix_file.csv");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
SimpleMatrix Example:
<syntaxhighlight lang="java">
public static void main( String[] args ) {
    SimpleMatrix A = new SimpleMatrix(2,3,true,new double[]{1,2,3,4,5,6});

    try {
        A.saveToFileCSV("matrix_file.csv");
        SimpleMatrix B = SimpleMatrix.loadCSV("matrix_file.csv");
        B.print();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
</syntaxhighlight>
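The header-driven CSV layout described above is simple enough to parse by hand. The sketch below does so in plain Java; the class and method names are hypothetical and are only for illustration — MatrixIO.saveCSV/loadCSV are the real EJML API.

```java
public class CsvSketch {
    // Parse the CSV layout described above: a header line "rows cols real"
    // followed by the matrix elements in row-major order.
    // Hypothetical helper, not part of EJML.
    public static double[][] parse(String text) {
        String[] tokens = text.trim().split("\\s+");
        int rows = Integer.parseInt(tokens[0]);
        int cols = Integer.parseInt(tokens[1]);
        // tokens[2] is the type flag, e.g. "real"; only real is handled here
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows * cols; i++) {
            m[i / cols][i % cols] = Double.parseDouble(tokens[3 + i]);
        }
        return m;
    }
}
```

Fed the example file above, this would yield a 2x3 array whose first row is {2.4, 6.7, 9}.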
= Matlab =
Want to read and write EJML matrices in Matlab format? HEBI Robotics has you covered with their library https://github.com/HebiRobotics/MFL
= Visual Display =
Understanding the state of a matrix from text output can be difficult, especially for large matrices. To help in these situations a visual way of viewing a matrix is provided in DMatrixVisualization. By calling MatrixIO.show() a window will be created that shows the matrix. Positive elements will appear as a shade of red, negative ones as a shade of blue, and zeros as black. How red or blue an element is depends on its magnitude.
Example Code:
<syntaxhighlight lang="java">
public static void main( String[] args ) {
    DMatrixRMaj A = new DMatrixRMaj(4,4,true,
            0,2,3,4,-2,0,2,3,-3,-2,0,2,-4,-3,-2,0);
    MatrixIO.show(A,"Small Matrix");

    DMatrixRMaj B = new DMatrixRMaj(25,50);
    for( int i = 0; i < 25; i++ )
        B.set(i,i,i+1);
    MatrixIO.show(B,"Larger Diagonal Matrix");
}
</syntaxhighlight>
Output:
{|
| http://ejml.org/wiki/MY_IMAGES/small_matrix.gif || http://ejml.org/wiki/MY_IMAGES/larger_matrix.gif
|}
= Deprecated =
The binary format, which used Java serialization, has been deprecated and will be removed in the not too distant future; it is now considered a significant security risk.
https://medium.com/swlh/hacking-java-deserialization-7625c8450334
In the future a new binary format might be provided (you can request this on GitHub), but for now you can use the Matlab format discussed above.
514b12fa2c2a0d67d0a50c83695eb43b620b72f9
Procedural
0
28
308
223
2021-03-24T15:58:15Z
Peter
1
wikitext
text/x-wiki
The procedural interface in EJML provides access to all of its capabilities along with much more control over which algorithms are used and when memory is allocated. The downside of this increased control is added programming difficulty; it somewhat resembles writing in assembly. Code can be made very efficient, but managing all the temporary data structures can be tedious.
The procedural interface supports all matrix types in EJML and follows a consistent naming pattern across them. Ops classes end in a suffix that indicates which type of matrix they can process. From the matrix name you can determine the element type (float, double, real, complex) and its internal data structure, e.g. row-major or block. In general, almost everyone will want to interact with row-major matrices; conversion to block format is done automatically internally when it becomes advantageous.
{| class="wikitable"
! Matrix Name !! Description !! Suffix
|-
| {{DataDocLink|DMatrixRMaj}} || Dense Double Real - Row Major || DDRM
|-
| {{DataDocLink|FMatrixRMaj}} || Dense Float Real - Row Major || FDRM
|-
| {{DataDocLink|ZDMatrixRMaj}} || Dense Double Complex - Row Major || ZDRM
|-
| {{DataDocLink|CDMatrixRMaj}} || Dense Float Complex - Row Major || CDRM
|-
| {{DataDocLink|DMatrixSparseCSC}} || Sparse Double Real - Compressed Column || DSCC
|-
| {{DataDocLink|DMatrixSparseTriplet}} || Sparse Double Real - Triplet || DSTL
|-
| {{DocLink|org/ejml/data/DMatrix3x3.html|DMatrix3x3}} || Dense Double Real 3x3 || DDF3
|-
| {{DocLink|org/ejml/data/DMatrix3.html|DMatrix3}} || Dense Double Real 3 || DDF3
|-
| {{DocLink|org/ejml/data/FMatrix3x3.html|FMatrix3x3}} || Dense Float Real 3x3 || FDF3
|-
| {{DocLink|org/ejml/data/FMatrix3.html|FMatrix3}} || Dense Float Real 3 || FDF3
|}
Fixed-size matrices from 2 to 6 are supported; just replace the 3 with the desired size. ''NOTE: In previous versions of EJML the matrix DMatrixRMaj was known as DenseMatrix64F.''
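Conceptually, the fixed-size types trade generality for speed by storing each element in its own field and fully unrolling every operation, avoiding array indexing and bounds checks. The sketch below illustrates the idea with a 3x3 multiply in plain Java; the class name is hypothetical and this is not EJML's actual DMatrix3x3/FixedOps code.

```java
public class Fixed3x3Sketch {
    // Each element is its own field, so there is no array indexing
    // and no bounds checking. (Hypothetical class for illustration.)
    public double a11, a12, a13, a21, a22, a23, a31, a32, a33;

    // c = a * b, with the loops fully unrolled as fixed-size ops classes do
    public static void mult(Fixed3x3Sketch a, Fixed3x3Sketch b, Fixed3x3Sketch c) {
        c.a11 = a.a11*b.a11 + a.a12*b.a21 + a.a13*b.a31;
        c.a12 = a.a11*b.a12 + a.a12*b.a22 + a.a13*b.a32;
        c.a13 = a.a11*b.a13 + a.a12*b.a23 + a.a13*b.a33;
        c.a21 = a.a21*b.a11 + a.a22*b.a21 + a.a23*b.a31;
        c.a22 = a.a21*b.a12 + a.a22*b.a22 + a.a23*b.a32;
        c.a23 = a.a21*b.a13 + a.a22*b.a23 + a.a23*b.a33;
        c.a31 = a.a31*b.a11 + a.a32*b.a21 + a.a33*b.a31;
        c.a32 = a.a31*b.a12 + a.a32*b.a22 + a.a33*b.a32;
        c.a33 = a.a31*b.a13 + a.a32*b.a23 + a.a33*b.a33;
    }
}
```

For tiny matrices this style avoids essentially all overhead, which is why the fixed-size suffixes (DDF2..DDF6) exist alongside the general row-major types.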
= Matrix Element Accessors =
* get( row , col )
* set( row , col , value )
** Returns or sets the value of an element at the specified row and column.
* unsafe_get( row , col )
* unsafe_set( row , col , value )
** Faster version of get() or set() that does not perform bounds checking.
* get( index )
* set( index )
** Returns or sets the value of an element at the specified index. Useful for vectors and element-wise operations.
* iterator( boolean rowMajor, int minRow, int minCol, int maxRow, int maxCol )
** An iterator that iterates through the sub-matrix by row or by column.
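For a dense row-major matrix the two-argument accessors reduce to simple 1D array arithmetic, which is also why the single-argument get(index) is useful for element-wise loops. A minimal sketch of the idea in plain Java (hypothetical class, not EJML's actual implementation):

```java
public class RowMajorSketch {
    // A dense row-major matrix stores element (row,col) at row*numCols + col
    // in a backing 1D array. This mirrors what get/set do for a row-major
    // matrix, minus the bounds checks that unsafe_get/unsafe_set skip.
    final double[] data;
    final int numRows, numCols;

    RowMajorSketch(int numRows, int numCols) {
        this.numRows = numRows;
        this.numCols = numCols;
        this.data = new double[numRows * numCols];
    }

    double get(int row, int col) {
        if (row < 0 || row >= numRows || col < 0 || col >= numCols)
            throw new IllegalArgumentException("out of bounds");
        return data[row * numCols + col];
    }

    void set(int row, int col, double value) {
        if (row < 0 || row >= numRows || col < 0 || col >= numCols)
            throw new IllegalArgumentException("out of bounds");
        data[row * numCols + col] = value;
    }

    // single-index accessor: addresses the backing array directly
    double get(int index) { return data[index]; }
}
```

The unsafe variants exist because, inside a tight inner loop that already guarantees valid indices, the bounds check is pure overhead.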
= Operations Classes =
Several "Ops" classes provide functions for manipulating different types of matrices; most are contained in the org.ejml.dense.* package, where * is the matrix structure package, e.g. row for row-major. The list below is for DMatrixRMaj; classes for other matrix types can be found by changing the suffix as discussed above.
; {{DocLink|org/ejml/dense/row/CommonOps_DDRM.html|CommonOps_DDRM}} : Provides the most common matrix operations.
; {{DocLink|org/ejml/dense/row/EigenOps_DDRM.html|EigenOps_DDRM}} : Provides operations related to eigenvalues and eigenvectors.
; {{DocLink|org/ejml/dense/row/MatrixFeatures_DDRM.html|MatrixFeatures_DDRM}} : Used to compute various features related to a matrix.
; {{DocLink|org/ejml/dense/row/NormOps_DDRM.html|NormOps_DDRM}} : Operations for computing different matrix norms.
; {{DocLink|org/ejml/dense/row/SingularOps_DDRM.html|SingularOps_DDRM}} : Operations related to singular value decompositions.
; {{DocLink|org/ejml/dense/row/SpecializedOps_DDRM.html|SpecializedOps_DDRM}} : Grab bag for operations which do not fit in anywhere else.
; {{DocLink|org/ejml/dense/row/RandomMatrices_DDRM.html|RandomMatrices_DDRM}} : Used to create different types of random matrices.
For fixed-size matrices, FixedOpsN is provided, where N = 2 to 6. FixedOpsN is similar in functionality to CommonOps.
f0f4214dd826a216c4a1c68bcf6177871a5cb4eb
Acknowledgments
0
5
309
15
2021-07-06T17:30:22Z
Peter
1
wikitext
text/x-wiki
== Development ==
EJML has been developed mostly by [https://www.linkedin.com/profile/view?id=9580871 Peter Abeles] in his spare time. Much of the development of EJML was inspired by his frustration with existing libraries at the time: they had very poor performance with small matrices, excessive memory creation/destruction, (arguably) not the best APIs, and tended to be abandoned by their developers soon after he had decided he liked one. The state of Java numerical libraries has improved in general since then. More recently, Graph BLAS features have been added by Florentin Dorre ([https://dl.acm.org/doi/abs/10.1145/3461837.3464627 paper]).
Additional thanks go to the [http://ihmc.us Institute for Human Machine Cognition] (IHMC), which encouraged the continued development of EJML and even commissioned the inclusion of the first few complex matrix operations after he had left. [https://www.hebirobotics.com/ HEBI Robotics] sponsored the continued development of support for sparse matrix operations, a much needed feature.
All the feedback and bug reports from its users have also had a significant influence on this library. Without their encouragement and help it would be less stable and much less fleshed out than it is today. The book [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins] also significantly influenced the development of the library in its early days. It is probably the best introduction to the computational side of linear algebra written so far and includes many important implementation details left out of other books.
== Dependencies ==
EJML is entirely self-contained and depends only on JUnit for its tests.
* http://www.junit.org/
aeedebca1b47faee8f1b0f4ab39d1c62e7e260e5
310
309
2021-07-06T17:31:36Z
Peter
1
wikitext
text/x-wiki
== Development ==
EJML has been developed mostly by [https://www.linkedin.com/profile/view?id=9580871 Peter Abeles] in his spare time. Much of the development of EJML was inspired by his frustration with existing libraries at the time: they had very poor performance with small matrices, excessive memory creation/destruction, (arguably) not the best APIs, and tended to be abandoned by their developers soon after he had decided he liked one. The state of Java numerical libraries has improved in general since then. More recently, Graph BLAS operations have been added by Florentin Dorre ([https://dl.acm.org/doi/abs/10.1145/3461837.3464627 paper]), filling in an often requested feature.
Additional thanks go to the [http://ihmc.us Institute for Human Machine Cognition] (IHMC), which encouraged the continued development of EJML and even commissioned the inclusion of the first few complex matrix operations after he had left. [https://www.hebirobotics.com/ HEBI Robotics] sponsored the continued development of support for sparse matrix operations, a much needed feature.
All the feedback and bug reports from its users have also had a significant influence on this library. Without their encouragement and help it would be less stable and much less fleshed out than it is today. The book [http://www.amazon.com/gp/product/0470528338/ref=as_li_ss_tl?ie=UTF8&tag=ejml-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0470528338 Fundamentals of Matrix Computations by David S. Watkins] also significantly influenced the development of the library in its early days. It is probably the best introduction to the computational side of linear algebra written so far and includes many important implementation details left out of other books.
== Dependencies ==
EJML is entirely self-contained and depends only on JUnit for its tests.
* http://www.junit.org/
f40e1bea56b5e6f31ea6d8afc61bca5efbf3ab10
Users
0
3
311
297
2021-07-06T17:35:06Z
Peter
1
wikitext
text/x-wiki
= Projects which use EJML =
Feel free to add your own project!
* [https://github.com/FlorentinD/GraphBlasInJavaBenchmarks Graph Blas in Java Benchmarks]
* [https://neo4j.com/ Neo4J]'s graph-data-science library.
* [https://www.db.bme.hu/preprints/thesis2018-multidimensional-graph-analysis.pdf Petra Várhegyi's masters thesis on graph analysis]
* [http://wiki.industrial-craft.net Industrial Craft 2] modification for minecraft
* [http://www-lium.univ-lemans.fr/diarization/doku.php/ LIUM_SpkDiarization] is a software package dedicated to speaker diarization (i.e. speaker segmentation and clustering).
* [http://researchers.lille.inria.fr/~freno/JProGraM.html JProGraM]: Library for learning a number of statistical models from data.
* [http://code.google.com/p/gogps/ goGPS]: Improve the positioning accuracy of low-cost GPS devices by RTK technique.
* [http://www-edc.eng.cam.ac.uk/tools/set_visualiser/ Set Visualiser]: Visualises the way that a number of items is classified into one or more categories or sets using Euler diagrams.
* Universal Java Matrix Library (UJML): http://www.ujmp.org/
* Scalalab: http://code.google.com/p/scalalab/
* Java Content Based Image Retrieval (JCBIR): http://code.google.com/p/jcbir/
* JLabGroovy: http://code.google.com/p/jlabgroovy/
* JquantLib (Will be added): http://www.jquantlib.org/
* Matlube: https://github.com/hohonuuli/matlube
* Geometric Regression Library: http://georegression.org/
* BoofCV: Computer Vision Library: http://boofcv.org/
* ICY: bio-imaging: http://www.bioimageanalysis.com/icy/
* JSkills: Java implementation of TrueSkill algorithm https://github.com/nsp/JSkills
* Portfolio applets at http://www.christoph-junge.de/optimizer.php
* Distributed Control Framework (DCF) http://www.i-a-i.com/dcfpro/
* JptView point cloud viewer: http://www.seas.upenn.edu/~aiv/jptview/
* JPrIME Bayesian phylogenetics library: http://code.google.com/p/jprime/
* J-Matrix quantum mechanics scattering https://code.google.com/p/jmatrix/
* DDogleg Numerics: http://ddogleg.org
* Saddle: http://saddle.github.io/doc/index.html
* GDSC ImageJ Plugins: http://www.sussex.ac.uk/gdsc/intranet/microscopy/imagej/gdsc_plugins
* Robot Controller for Humanoid Robots: http://www.ihmc.us/Research/projects/HumanoidRobots/index.html
* Credit Analytics: http://code.google.com/p/creditanalytics
* Spline Library: http://code.google.com/p/splinelibrary - http://www.credit-trader.org/CreditSuite/docs/SplineLibrary_2.2.pdf
* Fixed Point Finder: http://code.google.com/p/rootfinder - http://www.credit-trader.org/CreditSuite/docs/FixedPointFinder_2.2.pdf
* Sensitivity generation scheme in Credit Analytics: http://www.credit-trader.org/CreditSuite/docs/SensitivityGenerator_2.2.pdf
* Stanford CoreNLP: A set of natural language analysis tools: http://nlp.stanford.edu/software/corenlp.shtml
* OpenChrom: Open source software for the mass spectrometric analysis of chromatographic data. https://www.openchrom.net
= Papers That Cite EJML =
* A ton of [https://scholar.google.com/scholar?q=%22efficient+java+matrix+library%22&hl=en&as_sdt=0,5 academic papers]
* [https://dl.acm.org/doi/abs/10.1145/3461837.3464627 Florentin Dörre, Alexander Krause, Dirk Habich, and Martin Junghanns. 2021. A GraphBLAS implementation in pure Java. In Proceedings of the 4th ACM SIGMOD Joint International Workshop on Graph Data Management Experiences & Systems (GRADES) and Network Data Analytics (NDA)]
* Zewdie, Dawit Habtamu. "Representation discovery in non-parametric reinforcement learning." Diss. Massachusetts Institute of Technology, 2014.
* Sanfilippo, Filippo, et al. "A mapping approach for controlling different maritime cranes and robots using ANN." Mechatronics and Automation (ICMA), 2014 IEEE International Conference on. IEEE, 2014.
* Kushman, Nate, et al. "Learning to automatically solve algebra word problems." ACL (1) (2014): 271-281.
* Stergios Papadimitriou, Seferina Mavroudi, Kostas Theofilatos, and Spiridon Likothanasis, “MATLAB-Like Scripting of Java Scientific Libraries in ScalaLab,” Scientific Programming, vol. 22, no. 3, pp. 187-199, 2014.
* Alberto Castellini, Daniele Paltrinieri, and Vincenzo Manca "MP-GeneticSynth: Inferring Biological Network Regulations from Time Series" Bioinformatics 2014
* Blasinski, H., Bulan, O., & Sharma, G. (2013). Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework.
* Marin, R. C., & Dobre, C. (2013, November). Reaching for the clouds: contextually enhancing smartphones for energy efficiency. In Proceedings of the 2nd ACM workshop on High performance mobile opportunistic systems (pp. 31-38). ACM.
* Oletic, D., Skrapec, M., & Bilas, V. (2013). Monitoring Respiratory Sounds: Compressed Sensing Reconstruction via OMP on Android Smartphone. In Wireless Mobile Communication and Healthcare (pp. 114-121). Springer Berlin Heidelberg.
* Santhiar, Anirudh and Pandita, Omesh and Kanade, Aditya "Discovering Math APIs by Mining Unit Tests" Fundamental Approaches to Software Engineering 2013
* Sanjay K. Boddhu, Robert L. Williams, Edward Wasser, Niranjan Kode, "Increasing Situational Awareness using Smartphones" Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83891J (May 1, 2012)
* J. A. Álvarez-Bermejo, N. Antequera, R. García-Rubio and J. A. López-Ramos, _"A scalable server for key distribution and its application to accounting,"_ The Journal of Supercomputing, 2012
* Realini E., Yoshida D., Reguzzoni M., Raghavan V., _"Enhanced satellite positioning as a web service with goGPS open source software"_. Applied Geomatics 4(2), 135-142. 2012
* Stergios Papadimitriou, Constantinos Terzidis, Seferina Mavroudi, Spiridon D. Likothanassis: _Exploiting java scientific libraries with the scala language within the scalalab environment._ IET Software 5(6): 543-551 (2011)
* L. T. Lim, B. Ranaivo-Malançon and E. K. Tang. _“Symbiosis Between a Multilingual Lexicon and Translation Example Banks”._ In: Procedia: Social and Behavioral Sciences 27 (2011), pp. 61–69.
* G. Taboada, S. Ramos, R. Expósito, J. Touriño, R. Doallo, _Java in the High Performance Computing arena: Research, practice and experience,_ Science of Computer Programming, 2011.
* http://geomatica.como.polimi.it/presentazioni/Osaka_Summer_goGPS.pdf
* http://www.holger-arndt.de/library/MLOSS2010.pdf
* http://www.ateji.com/px/whitepapers/Ateji%20PX%20MatMult%20Whitepaper%20v1.2.pdf
Note: Slowly working on an EJML paper for publication. About 1/2 way through a first draft.
= On The Web =
* https://softwarerecs.stackexchange.com/questions/51330/sparse-matrix-library-for-java
* https://lessthanoptimal.github.io/Java-Matrix-Benchmark/
* http://java.dzone.com/announcements/introduction-efficient-java
* https://shakthydoss.wordpress.com/2011/01/13/jama-shortcoming/
* Various questions on stackoverflow.com
d12b315c0afbc2b47697479d31f3da443b3b6274
Example Kalman Filter
0
10
312
285
2021-07-07T15:02:57Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Operations || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* <disqus>Discuss this example</disqus>
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert a matrix in a Kalman filter; however, in some applications, such as target tracking, inverting the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DMatrixRMaj. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter {
    // kinematics description
    private SimpleMatrix F, Q, H;

    // system state estimate
    private SimpleMatrix x, P;

    @Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
        this.F = new SimpleMatrix(F);
        this.Q = new SimpleMatrix(Q);
        this.H = new SimpleMatrix(H);
    }

    @Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
        this.x = new SimpleMatrix(x);
        this.P = new SimpleMatrix(P);
    }

    @Override public void predict() {
        // x = F x
        x = F.mult(x);

        // P = F P F' + Q
        P = F.mult(P).mult(F.transpose()).plus(Q);
    }

    @Override public void update( DMatrixRMaj _z, DMatrixRMaj _R ) {
        // a fast way to make the matrices usable by SimpleMatrix
        SimpleMatrix z = SimpleMatrix.wrap(_z);
        SimpleMatrix R = SimpleMatrix.wrap(_R);

        // y = z - H x
        SimpleMatrix y = z.minus(H.mult(x));

        // S = H P H' + R
        SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);

        // K = P H' S^(-1)
        SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));

        // x = x + K y
        x = x.plus(K.mult(y));

        // P = (I - K H) P = P - K H P
        P = P.minus(K.mult(H).mult(P));
    }

    @Override public DMatrixRMaj getState() { return x.getMatrix(); }

    @Override public DMatrixRMaj getCovariance() { return P.getMatrix(); }
}
</syntaxhighlight>
== Operations Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter that is implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction has been reduced from the KalmanFilterSimple. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter {
    // kinematics description
    private DMatrixRMaj F, Q, H;

    // system state estimate
    private DMatrixRMaj x, P;

    // these are predeclared for efficiency reasons
    private DMatrixRMaj a, b;
    private DMatrixRMaj y, S, S_inv, c, d;
    private DMatrixRMaj K;

    private LinearSolverDense<DMatrixRMaj> solver;

    @Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
        this.F = F;
        this.Q = Q;
        this.H = H;

        int dimenX = F.numCols;
        int dimenZ = H.numRows;

        a = new DMatrixRMaj(dimenX, 1);
        b = new DMatrixRMaj(dimenX, dimenX);
        y = new DMatrixRMaj(dimenZ, 1);
        S = new DMatrixRMaj(dimenZ, dimenZ);
        S_inv = new DMatrixRMaj(dimenZ, dimenZ);
        c = new DMatrixRMaj(dimenZ, dimenX);
        d = new DMatrixRMaj(dimenX, dimenZ);
        K = new DMatrixRMaj(dimenX, dimenZ);

        x = new DMatrixRMaj(dimenX, 1);
        P = new DMatrixRMaj(dimenX, dimenX);

        // covariance matrices are symmetric positive semi-definite
        solver = LinearSolverFactory_DDRM.symmPosDef(dimenX);
    }

    @Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
        this.x.setTo(x);
        this.P.setTo(P);
    }

    @Override public void predict() {
        // x = F x
        mult(F, x, a);
        x.setTo(a);

        // P = F P F' + Q
        mult(F, P, b);
        multTransB(b, F, P);
        addEquals(P, Q);
    }

    @Override public void update( DMatrixRMaj z, DMatrixRMaj R ) {
        // y = z - H x
        mult(H, x, y);
        subtract(z, y, y);

        // S = H P H' + R
        mult(H, P, c);
        multTransB(c, H, S);
        addEquals(S, R);

        // K = P H' S^(-1)
        if (!solver.setA(S)) throw new RuntimeException("Invert failed");
        solver.invert(S_inv);
        multTransA(H, S_inv, d);
        mult(P, d, K);

        // x = x + K y
        mult(K, y, a);
        addEquals(x, a);

        // P = (I - K H) P = P - (K H) P = P - K (H P)
        mult(H, P, c);
        mult(K, c, b);
        subtractEquals(P, b);
    }

    @Override public DMatrixRMaj getState() { return x; }

    @Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter {
    // system state estimate
    private DMatrixRMaj x, P;

    private Equation eq;

    // Storage for precompiled code for predict and update
    Sequence predictX, predictP;
    Sequence updateY, updateK, updateX, updateP;

    @Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
        int dimenX = F.numCols;

        x = new DMatrixRMaj(dimenX, 1);
        P = new DMatrixRMaj(dimenX, dimenX);

        eq = new Equation();

        // Provide aliases between the symbolic variables and matrices we normally interact with.
        // The names do not have to be the same.
        eq.alias(x, "x", P, "P", Q, "Q", F, "F", H, "H");

        // Dummy matrix place holders to avoid compiler errors. Will be replaced later on.
        eq.alias(new DMatrixRMaj(1, 1), "z");
        eq.alias(new DMatrixRMaj(1, 1), "R");

        // Pre-compile so that it doesn't have to compile the equations each time they're invoked.
        // More cumbersome, but for small matrices the overhead is significant.
        predictX = eq.compile("x = F*x");
        predictP = eq.compile("P = F*P*F' + Q");

        updateY = eq.compile("y = z - H*x");
        updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
        updateX = eq.compile("x = x + K*y");
        updateP = eq.compile("P = P-K*(H*P)");
    }

    @Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
        this.x.setTo(x);
        this.P.setTo(P);
    }

    @Override public void predict() {
        predictX.perform();
        predictP.perform();
    }

    @Override public void update( DMatrixRMaj z, DMatrixRMaj R ) {
        // Alias will overwrite the reference to the previous matrices with the same name
        eq.alias(z, "z", R, "R");

        updateY.perform();
        updateK.perform();
        updateX.perform();
        updateP.perform();
    }

    @Override public DMatrixRMaj getState() { return x; }

    @Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
502807338ef7312ba53f342de160fe6da275a863
329
312
2023-02-10T15:52:56Z
Peter
1
wikitext
text/x-wiki
Here are three examples that demonstrate how a [http://en.wikipedia.org/wiki/Kalman_filter Kalman filter] can be created using different APIs in EJML. Each API has different advantages and disadvantages. High-level interfaces tend to be easier to use but sacrifice efficiency. The intent of this article is to illustrate this trend empirically. The runtime performance of each approach is shown below. To see how complex and readable each approach is, check out the source code below.
<center>
{| class="wikitable"
! API !! Execution Time (ms)
|-
| SimpleMatrix || 1875
|-
| Operations || 1280
|-
| Equations || 1698
|}
</center>
__TOC__
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.42/examples/src/org/ejml/example/KalmanFilterSimple.java KalmanFilterSimple]
* [https://github.com/lessthanoptimal/ejml/blob/v0.42/examples/src/org/ejml/example/KalmanFilterOperations.java KalmanFilterOperations]
* [https://github.com/lessthanoptimal/ejml/blob/v0.42/examples/src/org/ejml/example/KalmanFilterEquation.java KalmanFilterEquation]
* <disqus>Discuss this example</disqus>
----
'''NOTE:''' While the Kalman filter code below is fully functional and will work well in most applications, it might not be the best choice. Other variants seek to improve stability and/or avoid the matrix inversion. It's worth pointing out that some people say you should never invert a matrix in a Kalman filter; however, in some applications, such as target tracking, inverting the innovation covariance is helpful as a preprocessing step.
== SimpleMatrix Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using SimpleMatrix. The code tends to be easier to
* read and write, but the performance is degraded due to excessive creation/destruction of
* memory and the use of more generic algorithms. This also demonstrates how code can be
* seamlessly implemented using both SimpleMatrix and DMatrixRMaj. This allows code
* to be quickly prototyped or to be written either by novices or experts.
*
* @author Peter Abeles
*/
public class KalmanFilterSimple implements KalmanFilter {
// kinematics description
private ConstMatrix<SimpleMatrix> F, Q, H;
// sytem state estimate
private SimpleMatrix x, P;
@Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
this.F = new SimpleMatrix(F);
this.Q = new SimpleMatrix(Q);
this.H = new SimpleMatrix(H);
}
@Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
this.x = new SimpleMatrix(x);
this.P = new SimpleMatrix(P);
}
@Override public void predict() {
// x = F x
x = F.mult(x);
// P = F P F' + Q
P = F.mult(P).mult(F.transpose()).plus(Q);
}
@Override public void update( DMatrixRMaj _z, DMatrixRMaj _R ) {
// a fast way to make the matrices usable by SimpleMatrix
SimpleMatrix z = SimpleMatrix.wrap(_z);
SimpleMatrix R = SimpleMatrix.wrap(_R);
// y = z - H x
ConstMatrix<?> y = z.minus(H.mult(x));
// S = H P H' + R
ConstMatrix<?> S = H.mult(P).mult(H.transpose()).plus(R);
// K = PH'S^(-1)
ConstMatrix<?> K = P.mult(H.transpose().mult(S.invert()));
// x = x + Ky
x = x.plus(K.mult(y));
// P = (I-KH)P = P - KHP
P = P.minus(K.mult(H).mult(P));
}
@Override public DMatrixRMaj getState() { return x.getMatrix(); }
@Override public DMatrixRMaj getCovariance() { return P.getMatrix(); }
}
</syntaxhighlight>
== Operations Example ==
<syntaxhighlight lang="java">
/**
* A Kalman filter implemented using the operations API, which is procedural. Much of the excessive
* memory creation/destruction in KalmanFilterSimple has been eliminated. A specialized solver is
* used to invert the SPD matrix.
*
* @author Peter Abeles
*/
public class KalmanFilterOperations implements KalmanFilter {
// kinematics description
private DMatrixRMaj F, Q, H;
// system state estimate
private DMatrixRMaj x, P;
// these are predeclared for efficiency reasons
private DMatrixRMaj a, b;
private DMatrixRMaj y, S, S_inv, c, d;
private DMatrixRMaj K;
private LinearSolverDense<DMatrixRMaj> solver;
@Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
this.F = F;
this.Q = Q;
this.H = H;
int dimenX = F.numCols;
int dimenZ = H.numRows;
a = new DMatrixRMaj(dimenX, 1);
b = new DMatrixRMaj(dimenX, dimenX);
y = new DMatrixRMaj(dimenZ, 1);
S = new DMatrixRMaj(dimenZ, dimenZ);
S_inv = new DMatrixRMaj(dimenZ, dimenZ);
c = new DMatrixRMaj(dimenZ, dimenX);
d = new DMatrixRMaj(dimenX, dimenZ);
K = new DMatrixRMaj(dimenX, dimenZ);
x = new DMatrixRMaj(dimenX, 1);
P = new DMatrixRMaj(dimenX, dimenX);
// covariance matrices are symmetric positive semi-definite
solver = LinearSolverFactory_DDRM.symmPosDef(dimenX);
}
@Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
this.x.setTo(x);
this.P.setTo(P);
}
@Override public void predict() {
// x = F x
mult(F, x, a);
x.setTo(a);
// P = F P F' + Q
mult(F, P, b);
multTransB(b, F, P);
addEquals(P, Q);
}
@Override public void update( DMatrixRMaj z, DMatrixRMaj R ) {
// y = z - H x
mult(H, x, y);
subtract(z, y, y);
// S = H P H' + R
mult(H, P, c);
multTransB(c, H, S);
addEquals(S, R);
// K = PH'S^(-1)
if (!solver.setA(S)) throw new RuntimeException("Invert failed");
solver.invert(S_inv);
multTransA(H, S_inv, d);
mult(P, d, K);
// x = x + Ky
mult(K, y, a);
addEquals(x, a);
// P = (I-KH)P = P - (KH)P = P-K(HP)
mult(H, P, c);
mult(K, c, b);
subtractEquals(P, b);
}
@Override public DMatrixRMaj getState() { return x; }
@Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
== Equations Example ==
<syntaxhighlight lang="java">
/**
* Example of how the equation interface can greatly simplify code
*
* @author Peter Abeles
*/
public class KalmanFilterEquation implements KalmanFilter {
// system state estimate
private DMatrixRMaj x, P;
private Equation eq;
// Storage for precompiled code for predict and update
Sequence predictX, predictP;
Sequence updateY, updateK, updateX, updateP;
@Override public void configure( DMatrixRMaj F, DMatrixRMaj Q, DMatrixRMaj H ) {
int dimenX = F.numCols;
x = new DMatrixRMaj(dimenX, 1);
P = new DMatrixRMaj(dimenX, dimenX);
eq = new Equation();
// Provide aliases between the symbolic variables and matrices we normally interact with
// The names do not have to be the same.
eq.alias(x, "x", P, "P", Q, "Q", F, "F", H, "H");
// Dummy matrix placeholders to avoid compile errors. They will be replaced later on
eq.alias(new DMatrixRMaj(1, 1), "z");
eq.alias(new DMatrixRMaj(1, 1), "R");
// Pre-compile the equations so they don't have to be compiled each time they're invoked. This is more
// cumbersome, but for small matrices the compilation overhead is significant
predictX = eq.compile("x = F*x");
predictP = eq.compile("P = F*P*F' + Q");
updateY = eq.compile("y = z - H*x");
updateK = eq.compile("K = P*H'*inv( H*P*H' + R )");
updateX = eq.compile("x = x + K*y");
updateP = eq.compile("P = P-K*(H*P)");
}
@Override public void setState( DMatrixRMaj x, DMatrixRMaj P ) {
this.x.setTo(x);
this.P.setTo(P);
}
@Override public void predict() {
predictX.perform();
predictP.perform();
}
@Override public void update( DMatrixRMaj z, DMatrixRMaj R ) {
// Alias will overwrite the reference to the previous matrices with the same name
eq.alias(z, "z", R, "R");
updateY.perform();
updateK.perform();
updateX.perform();
updateP.perform();
}
@Override public DMatrixRMaj getState() { return x; }
@Override public DMatrixRMaj getCovariance() { return P; }
}
</syntaxhighlight>
Example Concurrent Operations
2021-07-07T15:05:16Z
Peter
Concurrent, or multi-threaded, operations are a relatively recent addition to EJML. EJML has traditionally focused on single-threaded performance, but this changed recently when "low hanging fruit" operations were converted into threaded code. Most operations do not have threaded variants yet, and it is always possible to call code which is purely single threaded. See below for more details.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/ExampleConcurrent.java ExampleConcurrent.java code]
== Example Code ==
<syntaxhighlight lang="java">
/**
* Concurrent or multi-threaded algorithms are a recent addition to EJML. Classes with concurrent implementations
* can be identified with _MT_ in the class name. For example CommonOps_MT_DDRM will contain concurrent implementations
* of operations such as matrix multiplication for dense row-major algorithms. Not everything has a concurrent
* implementation yet and in some cases entirely new algorithms will need to be implemented.
*
* @author Peter Abeles
*/
public class ExampleConcurrent {
public static void main( String[] args ) {
// Create a few random matrices that we will multiply and decompose
var rand = new Random(0xBEEF);
DMatrixRMaj A = RandomMatrices_DDRM.rectangle(4000, 4000, -1, 1, rand);
DMatrixRMaj B = RandomMatrices_DDRM.rectangle(A.numCols, 1000, -1, 1, rand);
DMatrixRMaj C = new DMatrixRMaj(1, 1);
// First do a concurrent matrix multiply using the default number of threads
System.out.println("Matrix Multiply, threads=" + EjmlConcurrency.getMaxThreads());
UtilEjml.printTime(" ", "Elapsed: ", () -> CommonOps_MT_DDRM.mult(A, B, C));
// Set it to two threads
EjmlConcurrency.setMaxThreads(2);
System.out.println("Matrix Multiply, threads=" + EjmlConcurrency.getMaxThreads());
UtilEjml.printTime(" ", "Elapsed: ", () -> CommonOps_MT_DDRM.mult(A, B, C));
// Then let's compare it against the single thread implementation
System.out.println("Matrix Multiply, Single Thread");
UtilEjml.printTime(" ", "Elapsed: ", () -> CommonOps_DDRM.mult(A, B, C));
// Setting the number of threads to 1 and then running an MT implementation actually calls completely different
// code than the regular function calls and will be less efficient. This will probably only be evident on
// small matrices though.
// In the future we will provide a way to optionally and automatically switch to concurrent implementations
// for larger matrices when calling standard functions.
}
}
</syntaxhighlight>
Example Graph Paths
2021-07-07T15:08:10Z
Peter
Many graph operations can be performed using linear algebra, and this connection is the subject of much recent research. EJML now has basic "Graph BLAS" capabilities, as this example shows.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/ExampleGraphPaths.java ExampleGraphPaths.java]
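Before the full EJML example, the core idea of a semiring can be sketched in a few lines of plain Java (an illustration with hypothetical names, not the EJML API): matrix multiply keeps the same triple-loop structure, but the scalar "+" and "*" are replaced by pluggable operators, such as OR and AND for reachability.

```java
import java.util.function.DoubleBinaryOperator;

// Sketch (not EJML API): matrix multiply over a pluggable semiring.
// The "+" operator, its identity element, and the "*" operator are parameters.
public class SemiRingMult {

    public static double[][] mult(double[][] a, double[][] b,
                                  DoubleBinaryOperator add, double addIdentity,
                                  DoubleBinaryOperator mul) {
        int rows = a.length, cols = b[0].length, inner = b.length;
        double[][] out = new double[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                double acc = addIdentity;
                for (int k = 0; k < inner; k++)
                    acc = add.applyAsDouble(acc, mul.applyAsDouble(a[i][k], b[k][j]));
                out[i][j] = acc;
            }
        }
        return out;
    }

    // OR_AND semiring: treats non-zero as "true", yielding one-step graph reachability
    public static final DoubleBinaryOperator OR = (x, y) -> (x != 0 || y != 0) ? 1 : 0;
    public static final DoubleBinaryOperator AND = (x, y) -> (x != 0 && y != 0) ? 1 : 0;
}
```

Multiplying a 1xN start vector by the adjacency matrix with (OR, AND) marks exactly the nodes reachable in one step, which is what the EJML example does with DSemiRings.OR_AND.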
== Example Code ==
<syntaxhighlight lang="java">
/**
* Example showing one iteration of the graph traversal algorithm breadth-first search (BFS)
* using different semirings, i.e. following the outgoing relationships for a set of starting nodes.
*
* More about the connection between graphs and linear algebra can be found at:
* https://github.com/GraphBLAS/GraphBLAS-Pointers.
*
* @author Florentin Doerre
*/
public class ExampleGraphPaths {
private static final int NODE_COUNT = 4;
public static void main( String[] args ) {
DMatrixSparseCSC adjacencyMatrix = new DMatrixSparseCSC(NODE_COUNT, 4);
// For the example we will be using the following graph:
// (3)<-[cost: 0.2]-(0)<-[cost: 0.1]->(2)<-[cost: 0.3]-(1)
adjacencyMatrix.set(0, 2, 0.1);
adjacencyMatrix.set(0, 3, 0.2);
adjacencyMatrix.set(2, 0, 0.1);
adjacencyMatrix.set(3, 2, 0.3);
// Semirings are used to redefine + and *, e.g. with OR for + and AND for *
DSemiRing lor_land = DSemiRings.OR_AND;
DSemiRing min_times = DSemiRings.MIN_TIMES;
DSemiRing plus_land = new DSemiRing(DMonoids.PLUS, DMonoids.AND);
// sparse vector (a matrix with a single row)
DMatrixSparseCSC startNodes = new DMatrixSparseCSC(1, NODE_COUNT);
// set node 0 as a start node
startNodes.set(0, 0, 1);
DMatrixSparseCSC outputVector = startNodes.createLike();
// Compute which nodes can be reached from the node 0 (disregarding the costs of the relationship)
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, lor_land, null, null, null);
System.out.println("Node 3 can be reached from node 0: " + (outputVector.get(0, 3) == 1));
System.out.println("Node 1 can be reached from node 0: " + (outputVector.get(0, 1) == 1));
// Add node 3 to the start nodes
startNodes.set(0, 3, 1);
// Find the number of paths by which each node can be reached
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, plus_land, null, null, null);
System.out.println("The number of start-nodes leading to node 2 is " + (int)outputVector.get(0, 2));
// Find the path with the minimal cost (direct connection from one of the specified starting nodes)
// the calculated cost equals the cost specified in the relationship (as both startNodes have a weight of 1)
// as an alternative you could use the MIN_PLUS semiring to consider the existing cost specified in the startNodes vector
CommonOpsWithSemiRing_DSCC.mult(startNodes, adjacencyMatrix, outputVector, min_times, null, null, null);
System.out.println("The minimal cost to reach the node 2 is " + outputVector.get(0, 2));
}
}
</syntaxhighlight>
Manual
2021-07-07T15:22:31Z
Peter
= The Basics =
Efficient Java Matrix Library (EJML) is a Java library for performing standard linear algebra operations on dense matrices. Typically the list of standard operations is divided into basic operations (addition, subtraction, multiplication, etc.), decompositions (LU, QR, SVD, etc.), and solving linear systems. A complete list of its core functionality can be found on the [[Capabilities]] page.
This manual describes how to use and develop an application using EJML. Answers to other questions, such as how to build it or include it in your project, are provided in the list below. If you have a question which isn't answered, or something is confusing, feel free to post it on the message board! Instruction on how to use EJML is primarily done in this manual through examples, see below. The examples are selected from common real-world problems, such as Kalman filters. Sometimes the same example is provided in three different formats using the three interfaces provided in EJML to help you understand the differences.
* [[Download|Download and Building]]
* [[Frequently Asked Questions|Frequently Asked Questions]]
* [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
EJML is compatible with Java 1.8 and beyond.
== The Interfaces ==
A primary design goal of EJML was to give users the ability to write both highly optimized code and easy to read/write code. Since it's hard to do this with a single API, EJML provides three different ways to interact with it.
* [[Procedural]]: You have full access to all of EJML's capabilities, can select individual algorithms, and almost complete control over memory. The downside is it feels a bit like you're programming in assembly and it's tedious to have that much control over memory.
* [[SimpleMatrix]]: An object oriented API that allows you to chain multiple operations together in a fluent style, which is much easier to read and write. A limited subset of operations is supported, and memory is constantly created and destroyed.
* [[Equations]]: A symbolic interface that allows you to manipulate matrices in a similar manner to Matlab/Octave. Equations can be precompiled and won't declare new memory if the input size doesn't change. It's a bit of a black box, and the compiler isn't smart enough to pick the most efficient functions.
Example of computing the Kalman gain "K":
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
It's hard to say which interface is the best. If you are dealing with small matrices and need to write highly optimized code then ''Procedural'' is the way to go. For large matrices it doesn't really matter which one you use since the overhead is insignificant compared to the matrix operations. If you want to write something quickly then [[SimpleMatrix]] or [[Equations]] is the way to go. For those of you who are concerned about performance, I recommend coding it up first using SimpleMatrix or Equations then benchmarking to see if that code is a bottleneck. Much easier to debug that way.
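The "benchmark before optimizing" advice above can be followed with something as small as the sketch below (plain Java with a hypothetical helper name; for serious measurements prefer a dedicated harness such as JMH, since naive timing is affected by JIT warmup and garbage collection):

```java
// Minimal timing helper sketch. Runs a task a few times to let the JIT warm up,
// then reports the average wall-clock time per run in milliseconds.
public class TimeIt {
    public static double averageMillis(Runnable task, int warmup, int runs) {
        for (int i = 0; i < warmup; i++) task.run();   // warm up the JIT
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) task.run();
        return (System.nanoTime() - start) / 1.0e6 / runs;
    }
}
```

Wrapping the SimpleMatrix or Equations version of your code in such a helper quickly shows whether it is actually a bottleneck before you rewrite it procedurally.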
[[Performance|Comparison of Interface Runtime Performance]]
= Tutorials =
* [[Matlab to EJML|Matlab to EJML]]
* [[Tutorial Complex|Complex]]
* [[Solving Linear Systems|Solving Linear Systems]]
* [[Matrix Decompositions|Matrix Decompositions]]
* [[Random matrices, Matrix Features, and Matrix Norms]]
* [[Extract and Insert|Extracting and Inserting submatrices and vectors]]
* [[Input and Output|Matrix Input/Output]]
* [[Unit Testing]]
= Example Code =
The following are code examples of common linear algebra problems intended to demonstrate different parts of EJML. The table below indicates which interface or interfaces each example uses.
{| class="wikitable" border="1" |
! Name !! Procedural !! SimpleMatrix !! Equations
|-
| [[Example Kalman Filter|Kalman Filter]] || X || X || X
|-
| [[Example Sparse Matrices|Sparse Matrix Basics]] || X || ||
|-
| [[Example Levenberg-Marquardt|Levenberg-Marquardt]] || X || ||
|-
| [[Example Principal Component Analysis|Principal Component Analysis]] || X || ||
|-
| [[Example Polynomial Fitting|Polynomial Fitting]] || X || ||
|-
| [[Example Polynomial Roots|Polynomial Roots]] || X || ||
|-
| [[Example Customizing Equations|Customizing Equations]] || || || X
|-
| [[Example Customizing SimpleMatrix|Customizing SimpleMatrix]] || || X ||
|-
| [[Example Fixed Sized Matrices|Fixed Sized Matrices]] || X || ||
|-
| [[Example Complex Math|Complex Math]] || X || ||
|-
| [[Example Complex Matrices|Complex Matrices]] || X || ||
|-
| [[Example Concurrent Operations|Concurrent Operations]] || X || ||
|-
| [[Example Graph Paths|(GraphBLAS) Graph Paths]] || X || ||
|-
| [[Example Masked Triangle Count|(GraphBLAS) Masked Triangle Count]] || X || ||
|-
| [[Example Large Dense Matrices|Optimizing Large Dense]] || X || ||
|}
= External References =
Want to learn more about how EJML works so that you can write more effective code and employ more advanced techniques? Curious where EJML's logo comes from? The following books are recommended reading and made EJML's early development possible.
* Best introduction to the subject that balances clarity without sacrificing important implementation details:
** Fundamentals of Matrix Computations by David S. Watkins
* Classic reference book that tersely covers hundreds of algorithms:
** Matrix Computations by G. Golub and C. Van Loan
* Covers the sparse algorithms used in EJML:
** Direct Methods for Sparse Linear Systems by Timothy A. Davis
* Popular book on linear algebra:
** Linear Algebra and Its Applications by Gilbert Strang
Example Masked Triangle Count
2021-07-07T15:23:21Z
Peter
Many graph operations can be performed using linear algebra, and this connection is the subject of much recent research. EJML now has basic "Graph BLAS" capabilities, as this example shows.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/ExampleMaskedTriangleCount.java ExampleMaskedTriangleCount.java]
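The masked multiply at the heart of this example can be illustrated with a plain-Java sketch (hypothetical names, not the EJML API): compute A*A, but only at positions where the mask, here the adjacency matrix itself, has an edge, so that only closed length-2 paths are counted.

```java
// Sketch (not EJML API): triangles per vertex via A*A masked by A,
// over a plain boolean adjacency matrix for an undirected graph.
public class TriangleCountSketch {

    public static int[] trianglesPerVertex(boolean[][] adj) {
        int n = adj.length;
        int[] count = new int[n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (!adj[i][j]) continue;           // mask: require the closing edge i--j
                for (int k = 0; k < n; k++)         // count paths i->k->j
                    if (adj[i][k] && adj[k][j]) count[i]++;
            }
            count[i] /= 2;                          // each triangle at i is seen twice (j and k swapped)
        }
        return count;
    }
}
```

The EJML version below does the same thing with sparse matrices, the PLUS_TIMES semiring, and a mask built from the adjacency matrix.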
== Example Code ==
<syntaxhighlight lang="java">
/**
* Example using masked matrix multiplication to count the triangles in a graph.
* Triangle counting is used to detect communities in graphs and often used to analyse social graphs.
*
* More about the connection between graphs and linear algebra can be found at:
* https://github.com/GraphBLAS/GraphBLAS-Pointers.
*
* @author Florentin Doerre
*/
public class ExampleMaskedTriangleCount {
public static void main( String[] args ) {
// For the example we will be using the following graph:
// (0)--(1)--(2)--(0), (2)--(3)--(4)--(2), (5)
var adjacencyMatrix = new DMatrixSparseCSC(6, 6, 24);
adjacencyMatrix.set(0, 1, 1);
adjacencyMatrix.set(0, 2, 1);
adjacencyMatrix.set(1, 2, 1);
adjacencyMatrix.set(2, 3, 1);
adjacencyMatrix.set(2, 4, 1);
adjacencyMatrix.set(3, 4, 1);
// Triangle Count is defined over undirected graphs, therefore we make the matrix symmetric (i.e. undirected)
adjacencyMatrix.copy().createCoordinateIterator().forEachRemaining(v -> adjacencyMatrix.set(v.col, v.row, v.value));
// In a graph context mxm computes all paths of length 2 (a->b->c).
// But for triangles we are only interested in the "closed" paths which form a triangle (a->b->c->a).
// To avoid computing irrelevant paths, we can use the adjacency matrix as the mask, which ensures the edge (a->c) exists.
var mask = DMaskFactory.builder(adjacencyMatrix, true).build();
var triangleMatrix = CommonOpsWithSemiRing_DSCC.mult(adjacencyMatrix, adjacencyMatrix, null, DSemiRings.PLUS_TIMES, mask, null, null);
// To compute the triangles per vertex we calculate the sum per each row.
// For the correct count, we need to divide the count by 2 as each triangle was counted twice (a--b--c, and a--c--b)
var trianglesPerVertex = CommonOps_DSCC.reduceRowWise(triangleMatrix, 0, Double::sum, null);
CommonOps_DDRM.apply(trianglesPerVertex, v -> v/2);
System.out.println("Triangles including vertex 0 " + trianglesPerVertex.get(0));
System.out.println("Triangles including vertex 2 " + trianglesPerVertex.get(2));
System.out.println("Triangles including vertex 5 " + trianglesPerVertex.get(5));
// Note: To avoid counting each triangle twice, the lower triangle over the adjacency matrix can be used TRI<A> = A * L
}
}
</syntaxhighlight>
Example Sparse Matrices
2021-07-07T15:25:22Z
Peter
Support for sparse matrices has recently been added to EJML. It supports many, but not all, of the standard operations that are supported for dense matrices. The code below shows the basics of working with a sparse matrix. In some situations the speed improvement from using a sparse matrix can be substantial. Do note that if your system isn't sparse enough, or if its structure isn't advantageous, it could run even slower using sparse operations!
<center>
{| class="wikitable"
! Type !! Execution Time (ms)
|-
| Dense || 12660
|-
| Sparse || 1642
|}
</center>
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/ExampleSparseMatrix.java ExampleSparseMatrix.java]
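As background for the code below, the difference between the two sparse formats can be sketched in plain Java (a hypothetical class, not the EJML API): a triplet format is just a list of (row, col, value) entries, while CSC (compressed sparse column) stores the values grouped by column behind a column-pointer array, which is what makes column-oriented math fast.

```java
import java.util.Arrays;

// Sketch (not EJML API): building a CSC representation from (row, col, value) triplets.
public class TripletToCsc {
    double[] values;   // non-zero values, ordered column by column
    int[] rowIdx;      // row index of each stored value
    int[] colPtr;      // entries of column j occupy indices colPtr[j]..colPtr[j+1]-1

    TripletToCsc(int numRows, int numCols, int[] rows, int[] cols, double[] vals) {
        int nz = vals.length;
        values = new double[nz];
        rowIdx = new int[nz];
        colPtr = new int[numCols + 1];
        for (int c : cols) colPtr[c + 1]++;                           // count entries per column
        for (int j = 0; j < numCols; j++) colPtr[j + 1] += colPtr[j]; // prefix sum -> offsets
        int[] next = Arrays.copyOf(colPtr, numCols);                  // insertion cursor per column
        for (int i = 0; i < nz; i++) {
            int dst = next[cols[i]]++;
            rowIdx[dst] = rows[i];
            values[dst] = vals[i];
        }
    }

    /** Looks up an element by scanning its column; absent entries are an implicit 0. */
    double get(int row, int col) {
        for (int i = colPtr[col]; i < colPtr[col + 1]; i++)
            if (rowIdx[i] == row) return values[i];
        return 0.0;
    }
}
```

EJML's DMatrixSparseTriplet and DMatrixSparseCSC play exactly these two roles, with DConvertMatrixStruct.convert performing the conversion.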
== Sparse Matrix Example ==
<syntaxhighlight lang="java">
/**
* Example showing how to construct and solve a linear system using sparse matrices
*
* @author Peter Abeles
*/
public class ExampleSparseMatrix {
public static int ROWS = 100000;
public static int COLS = 1000;
public static int XCOLS = 1;
public static void main( String[] args ) {
Random rand = new Random(234);
// The triplet format is easy to construct a sparse matrix with, but hard to do computations with
// NOTE: It is very important that you set 'initLength' to the actual number of elements in the final array.
// If you don't, it will be forced to thrash memory as it grows its internal data structures.
// Failure to heed this advice can make construction of large matrices 4x slower and use 2x more memory
DMatrixSparseTriplet work = new DMatrixSparseTriplet(5, 4, 5);
work.addItem(0, 1, 1.2);
work.addItem(3, 0, 3);
work.addItem(1, 1, 22.21234);
work.addItem(2, 3, 6);
// convert into a format that's easier to perform math with
DMatrixSparseCSC Z = DConvertMatrixStruct.convert(work, (DMatrixSparseCSC)null);
// print the matrix to standard out in two different formats
Z.print();
System.out.println();
Z.printNonZero();
System.out.println();
// Create a large matrix that is 5% filled
DMatrixSparseCSC A = RandomMatrices_DSCC.rectangle(ROWS, COLS, (int)(ROWS*COLS*0.05), rand);
// large vector that is 70% filled
DMatrixSparseCSC x = RandomMatrices_DSCC.rectangle(COLS, XCOLS, (int)(XCOLS*COLS*0.7), rand);
System.out.println("Done generating random matrices");
// storage for the initial solution
DMatrixSparseCSC y = new DMatrixSparseCSC(ROWS, XCOLS, 0);
DMatrixSparseCSC z = new DMatrixSparseCSC(ROWS, XCOLS, 0);
// To demonstrate how to perform sparse math let's multiply:
// y=A*x
// Optional storage is set to null so that it will declare it internally
long before = System.currentTimeMillis();
IGrowArray workA = new IGrowArray(A.numRows);
DGrowArray workB = new DGrowArray(A.numRows);
for (int i = 0; i < 100; i++) {
CommonOps_DSCC.mult(A, x, y, workA, workB);
CommonOps_DSCC.add(1.5, y, 0.75, y, z, workA, workB);
}
long after = System.currentTimeMillis();
System.out.println("norm = " + NormOps_DSCC.fastNormF(y) + " sparse time = " + (after - before) + " ms");
DMatrixRMaj Ad = DConvertMatrixStruct.convert(A, (DMatrixRMaj)null);
DMatrixRMaj xd = DConvertMatrixStruct.convert(x, (DMatrixRMaj)null);
DMatrixRMaj yd = new DMatrixRMaj(y.numRows, y.numCols);
DMatrixRMaj zd = new DMatrixRMaj(y.numRows, y.numCols);
before = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
CommonOps_DDRM.mult(Ad, xd, yd);
CommonOps_DDRM.add(1.5, yd, 0.75, yd, zd);
}
after = System.currentTimeMillis();
System.out.println("norm = " + NormOps_DDRM.fastNormF(yd) + " dense time = " + (after - before) + " ms");
}
}
</syntaxhighlight>
Example Levenberg-Marquardt
2021-07-07T15:26:21Z
Peter
Levenberg-Marquardt (LM) is a popular non-linear optimization algorithm. This example demonstrates how a basic implementation of Levenberg-Marquardt can be created using EJML's [[Procedural|procedural]] interface. Unnecessary allocation of new memory is avoided by reshaping matrices. When a matrix is reshaped its width and height are changed, but new memory is not declared unless the new shape requires more memory than is available.
LM works by being provided a function which computes the residual error. The residual error is defined as the difference between the predicted output and the actual observed output, e.g. f(x)-y. Optimization works by finding the set of parameters which minimizes the magnitude of the residuals in the least-squares (2-norm) sense.
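The cost being minimized can be written as a tiny plain-Java sketch (the linear model f(x) = p0 + p1*x is a made-up stand-in for whatever function is being fit; the class name is hypothetical):

```java
// Sketch: mean of squared residuals, cost(P) = (1/N) * sum_i (f(x_i; P) - y_i)^2,
// shown for a hypothetical linear model f(x; P) = p0 + p1*x.
public class ResidualCost {
    public static double cost(double[] p, double[] x, double[] y) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) {
            double r = (p[0] + p[1] * x[i]) - y[i];  // residual f(x_i) - y_i
            sum += r * r;
        }
        return sum / x.length;
    }
}
```

This is the same quantity the example's cost() method computes with NormOps_DDRM.normF over the residual vector.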
'''Note:''' This is a simple, straightforward implementation of Levenberg-Marquardt and is not as robust as Minpack's implementation. If you are looking for a robust non-linear least-squares minimization library in Java, check out [http://ddogleg.org DDogleg].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/LevenbergMarquardt.java LevenbergMarquardt.java code]
== Example Code ==
<syntaxhighlight lang="java">
/**
* <p>
* This is a straightforward implementation of the Levenberg-Marquardt (LM) algorithm. LM is used to minimize
* non-linear cost functions:<br>
* <br>
* S(P) = Sum{ i=1:m , [y<sub>i</sub> - f(x<sub>i</sub>,P)]<sup>2</sup>}<br>
* <br>
* where P is the set of parameters being optimized.
* </p>
*
* <p>
* In each iteration the parameters are updated using the following equations:<br>
* <br>
* P<sub>i+1</sub> = P<sub>i</sub> - (H + λ I)<sup>-1</sup> d <br>
* d = (1/N) Sum{ i=1..N , (f(x<sub>i</sub>;P<sub>i</sub>) - y<sub>i</sub>) * jacobian(:,i) } <br>
* H = (1/N) Sum{ i=1..N , jacobian(:,i) * jacobian(:,i)<sup>T</sup> }
* </p>
* <p>
* Whenever possible the allocation of new memory is avoided. This is accomplished by reshaping matrices.
* A matrix that is reshaped won't grow unless the new shape requires more memory than it has available.
* </p>
*
* @author Peter Abeles
*/
public class LevenbergMarquardt {
// Convergence criteria
private int maxIterations = 100;
private double ftol = 1e-12;
private double gtol = 1e-12;
// How much the numerical Jacobian calculation perturbs the parameters by.
// In better implementations there are better ways to compute this delta. See Numerical Recipes.
private final static double DELTA = 1e-8;
// Damping parameter. Larger values make the step behave more like gradient descent
private double initialLambda;
// the function that is optimized
private ResidualFunction function;
// the optimized parameters and associated costs
private DMatrixRMaj candidateParameters = new DMatrixRMaj(1, 1);
private double initialCost;
private double finalCost;
// used by matrix operations
private DMatrixRMaj g = new DMatrixRMaj(1, 1); // gradient
private DMatrixRMaj H = new DMatrixRMaj(1, 1); // Hessian approximation
private DMatrixRMaj Hdiag = new DMatrixRMaj(1, 1);
private DMatrixRMaj negativeStep = new DMatrixRMaj(1, 1);
// variables used by the numerical jacobian algorithm
private DMatrixRMaj temp0 = new DMatrixRMaj(1, 1);
private DMatrixRMaj temp1 = new DMatrixRMaj(1, 1);
// used when computing d and H variables
private DMatrixRMaj residuals = new DMatrixRMaj(1, 1);
// Where the numerical Jacobian is stored.
private DMatrixRMaj jacobian = new DMatrixRMaj(1, 1);
public double getInitialCost() {
return initialCost;
}
public double getFinalCost() {
return finalCost;
}
/**
* @param initialLambda Initial value of dampening parameter. Try 1 to start
*/
public LevenbergMarquardt( double initialLambda ) {
this.initialLambda = initialLambda;
}
/**
* Specifies convergence criteria
*
* @param maxIterations Maximum number of iterations
* @param ftol convergence based on change in function value. try 1e-12
* @param gtol convergence based on residual magnitude. Try 1e-12
*/
public void setConvergence( int maxIterations, double ftol, double gtol ) {
this.maxIterations = maxIterations;
this.ftol = ftol;
this.gtol = gtol;
}
/**
* Finds the best fit parameters.
*
* @param function The function being optimized
* @param parameters (Input/Output) initial parameter estimate and storage for optimized parameters
* @return true if it succeeded and false if it did not.
*/
public boolean optimize( ResidualFunction function, DMatrixRMaj parameters ) {
configure(function, parameters.getNumElements());
// save the cost of the initial parameters so that it knows if it improves or not
double previousCost = initialCost = cost(parameters);
// iterate until the difference between the costs is insignificant
double lambda = initialLambda;
// if it should recompute the Jacobian in this iteration or not
boolean computeHessian = true;
for (int iter = 0; iter < maxIterations; iter++) {
if (computeHessian) {
// compute some variables based on the gradient
computeGradientAndHessian(parameters);
computeHessian = false;
// check for convergence using gradient test
boolean converged = true;
for (int i = 0; i < g.getNumElements(); i++) {
if (Math.abs(g.data[i]) > gtol) {
converged = false;
break;
}
}
if (converged)
return true;
}
// H = H + lambda*I
for (int i = 0; i < H.numRows; i++) {
H.set(i, i, Hdiag.get(i) + lambda);
}
// In robust implementations failure to solve is handled much better
if (!CommonOps_DDRM.solve(H, g, negativeStep)) {
return false;
}
// compute the candidate parameters
CommonOps_DDRM.subtract(parameters, negativeStep, candidateParameters);
double cost = cost(candidateParameters);
if (cost <= previousCost) {
// the candidate parameters produced better results so use it
computeHessian = true;
parameters.setTo(candidateParameters);
// check for convergence
// converged when (cost(k) - cost(k+1))/cost(k) <= ftol
boolean converged = ftol*previousCost >= previousCost - cost;
previousCost = cost;
lambda /= 10.0;
if (converged) {
finalCost = previousCost;
return true;
}
} else {
lambda *= 10.0;
}
}
finalCost = previousCost;
return true;
}
/**
* Performs sanity checks on the input data and reshapes internal matrices. By reshaping
* a matrix it will only declare new memory when needed.
*/
protected void configure( ResidualFunction function, int numParam ) {
this.function = function;
int numFunctions = function.numFunctions();
// reshaping a matrix means that new memory is only declared when needed
candidateParameters.reshape(numParam, 1);
g.reshape(numParam, 1);
H.reshape(numParam, numParam);
negativeStep.reshape(numParam, 1);
// Normally these variables are thought of as row vectors, but it works out easier if they are column vectors
temp0.reshape(numFunctions, 1);
temp1.reshape(numFunctions, 1);
residuals.reshape(numFunctions, 1);
jacobian.reshape(numFunctions, numParam);
}
/**
* Computes the gradient g and the approximate Hessian H.
*
* g = J'*(f(x)-y) <--- that's also the gradient
* H = J'*J
*/
private void computeGradientAndHessian( DMatrixRMaj param ) {
// residuals = f(x) - y
function.compute(param, residuals);
computeNumericalJacobian(param, jacobian);
CommonOps_DDRM.multTransA(jacobian, residuals, g);
CommonOps_DDRM.multTransA(jacobian, jacobian, H);
CommonOps_DDRM.extractDiag(H, Hdiag);
}
/**
* Computes the "cost" for the parameters given.
*
* cost = (1/N) Sum (f(x) - y)^2
*/
private double cost( DMatrixRMaj param ) {
function.compute(param, residuals);
double error = NormOps_DDRM.normF(residuals);
return error*error/(double)residuals.numRows;
}
/**
* Computes a simple numerical Jacobian.
*
* @param param (input) The set of parameters that the Jacobian is to be computed at.
* @param jacobian (output) Where the jacobian will be stored
*/
protected void computeNumericalJacobian( DMatrixRMaj param,
DMatrixRMaj jacobian ) {
double invDelta = 1.0/DELTA;
function.compute(param, temp0);
// compute the jacobian by perturbing the parameters slightly
// then seeing how it affects the results.
for (int i = 0; i < param.getNumElements(); i++) {
param.data[i] += DELTA;
function.compute(param, temp1);
// compute the difference between the two parameters and divide by the delta
// temp1 = (temp1 - temp0)/delta
CommonOps_DDRM.add(invDelta, temp1, -invDelta, temp0, temp1);
// copy the results into the jacobian matrix
// J(:,i) = temp1
CommonOps_DDRM.insert(temp1, jacobian, 0, i);
param.data[i] -= DELTA;
}
}
/**
* The function that is being optimized. Returns the residual. f(x) - y
*/
public interface ResidualFunction {
/**
* Computes the residual vector given the set of input parameters.
* A function which maps N inputs to M outputs.
*
* @param param (Input) N by 1 parameter vector
* @param residual (Output) M by 1 output vector to store the residual = f(x)-y
*/
void compute( DMatrixRMaj param, DMatrixRMaj residual );
/**
* Number of functions in output
*
* @return function count
*/
int numFunctions();
}
}
</syntaxhighlight>
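The forward-difference Jacobian is the numerical heart of the optimizer above. As a minimal sketch of the same idea in plain Java (no EJML; the class and method names below are invented for illustration):

```java
import java.util.function.Function;

public class NumericalJacobianDemo {
    static final double DELTA = 1e-6;

    // Forward-difference Jacobian: J[i][j] = d f_i / d x_j
    static double[][] jacobian(Function<double[], double[]> f, double[] x) {
        double[] f0 = f.apply(x);
        double[][] J = new double[f0.length][x.length];
        for (int j = 0; j < x.length; j++) {
            // perturb one parameter, evaluate, then restore it
            x[j] += DELTA;
            double[] f1 = f.apply(x);
            x[j] -= DELTA;
            for (int i = 0; i < f0.length; i++) {
                J[i][j] = (f1[i] - f0[i]) / DELTA;
            }
        }
        return J;
    }

    public static void main(String[] args) {
        // f(x) = [x0^2, x0*x1]; the exact Jacobian at (2,3) is [[4,0],[3,2]]
        Function<double[], double[]> f = x -> new double[]{x[0]*x[0], x[0]*x[1]};
        double[][] J = jacobian(f, new double[]{2, 3});
        System.out.printf("%.3f %.3f%n%.3f %.3f%n", J[0][0], J[0][1], J[1][0], J[1][1]);
    }
}
```

The same trade-off applies as in the EJML version: forward differences cost one extra function evaluation per parameter and lose roughly half the available precision, which is usually acceptable for least-squares fitting.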
5a9ad0d4f247d0dc30f0a675ae415f0aafb5396f
Example Large Dense Matrices
0
64
319
291
2021-07-07T15:27:29Z
Peter
1
wikitext
text/x-wiki
Different approaches are required when writing high performance dense matrix operations for large matrices. For the most part, EJML will automatically switch to using these different approaches. A key parameter that needs to be tuned for a specific system is the block size. It can also make sense to work directly with block matrices instead of assuming EJML makes the best choice for your system.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/OptimizingLargeMatrixPerformance.java OptimizingLargeMatrixPerformance.java]
== Example Code ==
<syntaxhighlight lang="java">
/**
* For many operations EJML provides block matrix support. These block or tiled matrices are designed to reduce
* the number of cache misses which can kill performance when working on large matrices. A critical tuning parameter
* is the block size and this is system specific. The example below shows you how this parameter can be optimized.
*
* @author Peter Abeles
*/
public class OptimizingLargeMatrixPerformance {
public static void main( String[] args ) {
// Create larger matrices to experiment with
var rand = new Random(0xBEEF);
DMatrixRMaj A = RandomMatrices_DDRM.rectangle(3000, 3000, -1, 1, rand);
DMatrixRMaj B = A.copy();
DMatrixRMaj C = A.createLike();
// Since we are dealing with larger matrices let's use the concurrent implementation
UtilEjml.printTime("Row-Major Multiplication:", () -> CommonOps_MT_DDRM.mult(A, B, C));
// Converts A into a block matrix and creates a new matrix while leaving A unmodified
DMatrixRBlock Ab = MatrixOps_DDRB.convert(A);
// Converts B into a block matrix, but modifies its internal array in place. The returned block matrix
// will share the same data array as the input. Much more memory efficient, but you need to be careful.
DMatrixRBlock Bb = MatrixOps_DDRB.convertInplace(B, null, null);
DMatrixRBlock Cb = Ab.createLike();
// Since we are dealing with larger matrices let's use the concurrent implementation
UtilEjml.printTime("Block Multiplication: ", () -> MatrixOps_MT_DDRB.mult(Ab, Bb, Cb));
// Can we make this faster? Probably by adjusting the block size. This is system dependent so let's
// try a range of values
int defaultBlockWidth = EjmlParameters.BLOCK_WIDTH;
System.out.println("Default Block Size: " + defaultBlockWidth);
for (int block : new int[]{10, 20, 30, 50, 70, 100, 140, 200, 500}) {
EjmlParameters.BLOCK_WIDTH = block;
// Need to create the block matrices again since we changed the block size
DMatrixRBlock Ac = MatrixOps_DDRB.convert(A);
DMatrixRBlock Bc = MatrixOps_DDRB.convert(B);
DMatrixRBlock Cc = Ac.createLike();
UtilEjml.printTime("Block " + EjmlParameters.BLOCK_WIDTH + ": ", () -> MatrixOps_MT_DDRB.mult(Ac, Bc, Cc));
}
// On my system the optimal block size is around 100 and yields an improvement of about 5%.
// On some architectures the improvement can be substantial; on others the default value is very reasonable.
// Some decompositions will switch to a block format automatically, and matrix multiplication might in the
// future too. The main reason this hasn't happened yet is that, to be memory efficient, it would need to
// modify and then undo the modification of the input matrices, which would be very confusing if you're
// writing concurrent code.
}
}
</syntaxhighlight>
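To see why block size matters, the cache-blocking idea itself can be sketched without EJML. The tiled loop below reorders the classic triple loop so that small tiles of A, B, and C stay cache-resident; the class and method names are illustrative, not EJML API:

```java
public class TiledMultiplyDemo {
    // Naive triple loop: C = A*B for n x n row-major matrices
    static void multNaive(double[] A, double[] B, double[] C, int n) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int k = 0; k < n; k++)
                    sum += A[i*n + k] * B[k*n + j];
                C[i*n + j] = sum;
            }
    }

    // Tiled version: process `block` x `block` tiles so the working set fits in cache
    static void multTiled(double[] A, double[] B, double[] C, int n, int block) {
        java.util.Arrays.fill(C, 0);
        for (int i0 = 0; i0 < n; i0 += block)
            for (int k0 = 0; k0 < n; k0 += block)
                for (int j0 = 0; j0 < n; j0 += block)
                    for (int i = i0; i < Math.min(i0 + block, n); i++)
                        for (int k = k0; k < Math.min(k0 + block, n); k++) {
                            double a = A[i*n + k];
                            for (int j = j0; j < Math.min(j0 + block, n); j++)
                                C[i*n + j] += a * B[k*n + j];
                        }
    }

    public static void main(String[] args) {
        int n = 64;
        java.util.Random rand = new java.util.Random(0xBEEF);
        double[] A = new double[n*n], B = new double[n*n];
        double[] C1 = new double[n*n], C2 = new double[n*n];
        for (int i = 0; i < n*n; i++) { A[i] = rand.nextDouble(); B[i] = rand.nextDouble(); }
        multNaive(A, B, C1, n);
        multTiled(A, B, C2, n, 16);
        double maxDiff = 0;
        for (int i = 0; i < n*n; i++) maxDiff = Math.max(maxDiff, Math.abs(C1[i] - C2[i]));
        System.out.println("max difference: " + maxDiff);
    }
}
```

Both versions compute the same product; the tiled ordering only changes the memory access pattern, which is exactly the knob EjmlParameters.BLOCK_WIDTH tunes in EJML's block matrices.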
0507b444b8b5fb3565ce975b1fb6906a7da2720b
Example Polynomial Fitting
0
14
320
237
2021-07-07T15:29:51Z
Peter
1
wikitext
text/x-wiki
In this example it is shown how EJML can be used to fit a polynomial of arbitrary degree to a set of data. The key concepts shown here are: 1) how to create a linear solver using LinearSolverFactory, 2) how to use an adjustable linear solver, and 3) effective matrix reshaping. This is all done using the procedural interface.
First a best fit polynomial is fit to a set of data, then outliers are removed from the observation set and the coefficients are recomputed. Outliers are removed efficiently using an adjustable solver that does not re-solve the whole system.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/PolynomialFit.java PolynomialFit.java source code]
= PolynomialFit Example Code =
<syntaxhighlight lang="java">
/**
* <p>
* This example demonstrates how a polynomial can be fit to a set of data. This is done by
* using a least squares solver that is adjustable. By using an adjustable solver elements
* can be inexpensively removed and the coefficients recomputed. This is much less expensive
* than resolving the whole system from scratch.
* </p>
* <p>
* The following is demonstrated:<br>
* <ol>
* <li>Creating a solver using LinearSolverFactory</li>
* <li>Using an adjustable solver</li>
* <li>reshaping</li>
* </ol>
*
* @author Peter Abeles
*/
public class PolynomialFit {
// Vandermonde matrix
DMatrixRMaj A;
// matrix containing computed polynomial coefficients
DMatrixRMaj coef;
// observation matrix
DMatrixRMaj y;
// solver used to compute
AdjustableLinearSolver_DDRM solver;
/**
* Constructor.
*
* @param degree The polynomial's degree which is to be fit to the observations.
*/
public PolynomialFit( int degree ) {
coef = new DMatrixRMaj(degree + 1, 1);
A = new DMatrixRMaj(1, degree + 1);
y = new DMatrixRMaj(1, 1);
// create a solver that allows elements to be added or removed efficiently
solver = LinearSolverFactory_DDRM.adjustable();
}
/**
* Returns the computed coefficients
*
* @return polynomial coefficients that best fit the data.
*/
public double[] getCoef() {
return coef.data;
}
/**
* Computes the best fit set of polynomial coefficients to the provided observations.
*
* @param samplePoints where the observations were sampled.
* @param observations A set of observations.
*/
public void fit( double[] samplePoints, double[] observations ) {
// Create a copy of the observations and put it into a matrix
y.reshape(observations.length, 1, false);
System.arraycopy(observations, 0, y.data, 0, observations.length);
// reshape the matrix to avoid unnecessarily declaring new memory
// save values is set to false since the old values don't matter
A.reshape(y.numRows, coef.numRows, false);
// set up the A matrix
for (int i = 0; i < observations.length; i++) {
double obs = 1;
for (int j = 0; j < coef.numRows; j++) {
A.set(i, j, obs);
obs *= samplePoints[i];
}
}
// process the A matrix and see if it failed
if (!solver.setA(A))
throw new RuntimeException("Solver failed");
// solve for the coefficients
solver.solve(y, coef);
}
/**
* Removes the observation that fits the model the worst and recomputes the coefficients.
* This is done efficiently by using an adjustable solver. Often times the elements with
* the largest errors are outliers and not part of the system being modeled. By removing them
* a more accurate set of coefficients can be computed.
*/
public void removeWorstFit() {
// find the observation with the most error
int worstIndex = -1;
double worstError = -1;
for (int i = 0; i < y.numRows; i++) {
double predictedObs = 0;
for (int j = 0; j < coef.numRows; j++) {
predictedObs += A.get(i, j)*coef.get(j, 0);
}
double error = Math.abs(predictedObs - y.get(i, 0));
if (error > worstError) {
worstError = error;
worstIndex = i;
}
}
// nothing left to remove, so just return
if (worstIndex == -1)
return;
// remove that observation
removeObservation(worstIndex);
// update A
solver.removeRowFromA(worstIndex);
// solve for the parameters again
solver.solve(y, coef);
}
/**
* Removes an element from the observation matrix.
*
* @param index which element is to be removed
*/
private void removeObservation( int index ) {
final int N = y.numRows - 1;
final double[] d = y.data;
// shift
for (int i = index; i < N; i++) {
d[i] = d[i + 1];
}
y.numRows--;
}
}
</syntaxhighlight>
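As a rough illustration of the least-squares problem the adjustable solver handles above, the sketch below fits a polynomial by forming the Vandermonde matrix and solving the normal equations (A'A)c = A'y with plain arrays. This is a simplified stand-in with invented names, not the adjustable-solver approach EJML uses; normal equations are also less numerically robust than the factorization-based solvers LinearSolverFactory provides:

```java
public class VandermondeFitDemo {
    // Solve the small linear system M x = b with Gaussian elimination (partial pivoting)
    static double[] solve(double[][] M, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            int best = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(M[r][col]) > Math.abs(M[best][col])) best = r;
            double[] tr = M[col]; M[col] = M[best]; M[best] = tr;
            double tb = b[col]; b[col] = b[best]; b[best] = tb;
            for (int r = col + 1; r < n; r++) {
                double f = M[r][col] / M[col][col];
                for (int c = col; c < n; c++) M[r][c] -= f * M[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double sum = b[r];
            for (int c = r + 1; c < n; c++) sum -= M[r][c] * x[c];
            x[r] = sum / M[r][r];
        }
        return x;
    }

    // Least-squares polynomial fit via normal equations: (A'A) coef = A'y
    static double[] fit(double[] xs, double[] ys, int degree) {
        int m = xs.length, n = degree + 1;
        double[][] A = new double[m][n];
        for (int i = 0; i < m; i++) {
            double p = 1;  // Vandermonde row: 1, x, x^2, ...
            for (int j = 0; j < n; j++) { A[i][j] = p; p *= xs[i]; }
        }
        double[][] AtA = new double[n][n];
        double[] Aty = new double[n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) {
                Aty[j] += A[i][j] * ys[i];
                for (int k = 0; k < n; k++) AtA[j][k] += A[i][j] * A[i][k];
            }
        return solve(AtA, Aty);
    }

    public static void main(String[] args) {
        double[] xs = {0, 1, 2, 3};
        double[] ys = {2, 5, 8, 11};  // exactly y = 2 + 3x
        double[] coef = fit(xs, ys, 1);
        System.out.printf("%.3f %.3f%n", coef[0], coef[1]);
    }
}
```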
17ff4b1c877860d8f082a45e736efbfaeaa5ee73
Example Polynomial Roots
0
15
322
238
2021-07-07T15:31:20Z
Peter
1
wikitext
text/x-wiki
Eigenvalue decomposition can be used to find the roots of a polynomial by constructing the so-called [http://en.wikipedia.org/wiki/Companion_matrix companion matrix]. While faster techniques do exist for root finding, this is one of the most stable and probably the easiest to implement.
Because the companion matrix is not symmetric, a generalized eigenvalue [[MatrixDecomposition|decomposition]] is needed. The roots of the polynomial may also be [http://en.wikipedia.org/wiki/Complex_number complex].
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.31/examples/src/org/ejml/example/PolynomialRootFinder.java PolynomialRootFinder.java source code]
= Example Code =
<syntaxhighlight lang="java">
public class PolynomialRootFinder {
/**
* <p>
* Given a set of polynomial coefficients, compute the roots of the polynomial. Depending on
* the polynomial being considered, the roots may contain complex numbers. When complex numbers are
* present they will come in pairs of complex conjugates.
* </p>
*
* <p>
* Coefficients are ordered from least to most significant, e.g: y = c[0] + x*c[1] + x*x*c[2].
* </p>
*
* @param coefficients Coefficients of the polynomial.
* @return The roots of the polynomial
*/
public static Complex_F64[] findRoots( double... coefficients ) {
int N = coefficients.length - 1;
// Construct the companion matrix
DMatrixRMaj c = new DMatrixRMaj(N, N);
double a = coefficients[N];
for (int i = 0; i < N; i++) {
c.set(i, N - 1, -coefficients[i]/a);
}
for (int i = 1; i < N; i++) {
c.set(i, i - 1, 1);
}
// use generalized eigenvalue decomposition to find the roots
EigenDecomposition_F64<DMatrixRMaj> evd = DecompositionFactory_DDRM.eig(N, false);
evd.decompose(c);
Complex_F64[] roots = new Complex_F64[N];
for (int i = 0; i < N; i++) {
roots[i] = evd.getEigenvalue(i);
}
return roots;
}
}
</syntaxhighlight>
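The companion-matrix construction can be checked by hand for a quadratic. The sketch below (plain Java, with invented names) builds the same companion matrix as findRoots() and, for the 2x2 case, recovers its eigenvalues directly from the trace and determinant instead of a general eigenvalue decomposition:

```java
public class CompanionMatrixDemo {
    // Roots of a degree-2 polynomial c[0] + c[1]*x + c[2]*x^2 via its companion matrix
    static double[] roots(double[] c) {
        int N = c.length - 1;  // N = 2 here
        double a = c[N];
        // companion matrix: last column holds -c[i]/a, ones on the subdiagonal
        double[][] C = new double[N][N];
        for (int i = 0; i < N; i++) C[i][N - 1] = -c[i] / a;
        for (int i = 1; i < N; i++) C[i][i - 1] = 1;
        // for a 2x2 matrix the eigenvalues follow from the trace and determinant
        double tr = C[0][0] + C[1][1];
        double det = C[0][0]*C[1][1] - C[0][1]*C[1][0];
        double disc = Math.sqrt(tr*tr - 4*det);  // assumes real roots
        return new double[]{(tr - disc)/2, (tr + disc)/2};
    }

    public static void main(String[] args) {
        // p(x) = 2 - 3x + x^2 has roots 1 and 2
        double[] r = roots(new double[]{2, -3, 1});
        System.out.printf("roots: %.3f %.3f%n", r[0], r[1]);
    }
}
```

For degrees above two there is no closed form for the eigenvalues, which is why the EJML example delegates to a general eigenvalue decomposition.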
ece81f5095f1f243746e62b560558792ef98cdf8
Example Principal Component Analysis
0
13
323
236
2021-07-07T15:32:45Z
Peter
1
wikitext
text/x-wiki
Principal Component Analysis (PCA) is a popular and simple to implement classification technique, often used in face recognition. The following is an example of how to implement it in EJML using the procedural interface. It is assumed that the reader is already familiar with PCA.
External Resources
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/PrincipalComponentAnalysis.java PrincipalComponentAnalysis.java source code]
* [http://en.wikipedia.org/wiki/Principal_component_analysis General PCA information on Wikipedia]
= Sample Code =
<syntaxhighlight lang="java">
/**
* <p>
* The following is a simple example of how to perform basic principal component analysis in EJML.
* </p>
*
* <p>
* Principal Component Analysis (PCA) is typically used to develop a linear model for a set of data
* (e.g. face images) which can then be used to test for membership. PCA works by converting the
* set of data to a new basis that is a subspace of the original set. The subspace is selected
* to maximize information.
* </p>
* <p>
* PCA is typically derived as an eigenvalue problem. However in this implementation {@link org.ejml.interfaces.decomposition.SingularValueDecomposition SVD}
* is used instead because it will produce a more numerically stable solution. Computation using EVD requires explicitly
* computing the variance of each sample set. The variance is computed by squaring the residual, which can
* cause loss of precision.
* </p>
*
* <p>
* Usage:<br>
* 1) call setup()<br>
* 2) For each sample (e.g. an image ) call addSample()<br>
* 3) After all the samples have been added call computeBasis()<br>
* 4) Call sampleToEigenSpace() , eigenToSampleSpace() , errorMembership() , response()
* </p>
*
* @author Peter Abeles
*/
public class PrincipalComponentAnalysis {
// principal component subspace is stored in the rows
private DMatrixRMaj V_t;
// how many principal components are used
private int numComponents;
// where the data is stored
private DMatrixRMaj A = new DMatrixRMaj(1, 1);
private int sampleIndex;
// mean values of each element across all the samples
double[] mean;
/**
* Must be called before any other functions. Declares and sets up internal data structures.
*
* @param numSamples Number of samples that will be processed.
* @param sampleSize Number of elements in each sample.
*/
public void setup( int numSamples, int sampleSize ) {
mean = new double[sampleSize];
A.reshape(numSamples, sampleSize, false);
sampleIndex = 0;
numComponents = -1;
}
/**
* Adds a new sample of the raw data to internal data structure for later processing. All the samples
* must be added before computeBasis is called.
*
* @param sampleData Sample from original raw data.
*/
public void addSample( double[] sampleData ) {
if (A.getNumCols() != sampleData.length)
throw new IllegalArgumentException("Unexpected sample size");
if (sampleIndex >= A.getNumRows())
throw new IllegalArgumentException("Too many samples");
for (int i = 0; i < sampleData.length; i++) {
A.set(sampleIndex, i, sampleData[i]);
}
sampleIndex++;
}
/**
* Computes a basis (the principal components) from the most dominant eigenvectors.
*
* @param numComponents Number of vectors it will use to describe the data. Typically much
* smaller than the number of elements in the input vector.
*/
public void computeBasis( int numComponents ) {
if (numComponents > A.getNumCols())
throw new IllegalArgumentException("More components requested than the data's length.");
if (sampleIndex != A.getNumRows())
throw new IllegalArgumentException("Not all the data has been added");
if (numComponents > sampleIndex)
throw new IllegalArgumentException("More data needed to compute the desired number of components");
this.numComponents = numComponents;
// compute the mean of all the samples
for (int i = 0; i < A.getNumRows(); i++) {
for (int j = 0; j < mean.length; j++) {
mean[j] += A.get(i, j);
}
}
for (int j = 0; j < mean.length; j++) {
mean[j] /= A.getNumRows();
}
// subtract the mean from the original data
for (int i = 0; i < A.getNumRows(); i++) {
for (int j = 0; j < mean.length; j++) {
A.set(i, j, A.get(i, j) - mean[j]);
}
}
// Compute SVD and save time by not computing U
SingularValueDecomposition<DMatrixRMaj> svd =
DecompositionFactory_DDRM.svd(A.numRows, A.numCols, false, true, false);
if (!svd.decompose(A))
throw new RuntimeException("SVD failed");
V_t = svd.getV(null, true);
DMatrixRMaj W = svd.getW(null);
// Singular values are in an arbitrary order initially
SingularOps_DDRM.descendingOrder(null, false, W, V_t, true);
// strip off unneeded components and find the basis
V_t.reshape(numComponents, mean.length, true);
}
/**
* Returns a vector from the PCA's basis.
*
* @param which Which component's vector is to be returned.
* @return Vector from the PCA basis.
*/
public double[] getBasisVector( int which ) {
if (which < 0 || which >= numComponents)
throw new IllegalArgumentException("Invalid component");
DMatrixRMaj v = new DMatrixRMaj(1, A.numCols);
CommonOps_DDRM.extract(V_t, which, which + 1, 0, A.numCols, v, 0, 0);
return v.data;
}
/**
* Converts a vector from sample space into eigen space.
*
* @param sampleData Sample space data.
* @return Eigen space projection.
*/
public double[] sampleToEigenSpace( double[] sampleData ) {
if (sampleData.length != A.getNumCols())
throw new IllegalArgumentException("Unexpected sample length");
DMatrixRMaj mean = DMatrixRMaj.wrap(A.getNumCols(), 1, this.mean);
DMatrixRMaj s = new DMatrixRMaj(A.getNumCols(), 1, true, sampleData);
DMatrixRMaj r = new DMatrixRMaj(numComponents, 1);
CommonOps_DDRM.subtract(s, mean, s);
CommonOps_DDRM.mult(V_t, s, r);
return r.data;
}
/**
* Converts a vector from eigen space into sample space.
*
* @param eigenData Eigen space data.
* @return Sample space projection.
*/
public double[] eigenToSampleSpace( double[] eigenData ) {
if (eigenData.length != numComponents)
throw new IllegalArgumentException("Unexpected sample length");
DMatrixRMaj s = new DMatrixRMaj(A.getNumCols(), 1);
DMatrixRMaj r = DMatrixRMaj.wrap(numComponents, 1, eigenData);
CommonOps_DDRM.multTransA(V_t, r, s);
DMatrixRMaj mean = DMatrixRMaj.wrap(A.getNumCols(), 1, this.mean);
CommonOps_DDRM.add(s, mean, s);
return s.data;
}
/**
* <p>
* The membership error for a sample. If the error is less than a threshold then
* it can be considered a member. The threshold's value depends on the data set.
* </p>
* <p>
* The error is computed by projecting the sample into eigenspace, then projecting
* it back into sample space and computing the Euclidean distance between the two.
* </p>
*
* @param sampleA The sample whose membership status is being considered.
* @return Its membership error.
*/
public double errorMembership( double[] sampleA ) {
double[] eig = sampleToEigenSpace(sampleA);
double[] reproj = eigenToSampleSpace(eig);
double total = 0;
for (int i = 0; i < reproj.length; i++) {
double d = sampleA[i] - reproj[i];
total += d*d;
}
return Math.sqrt(total);
}
/**
* Computes the dot product of each basis vector against the sample. Can be used as a measure
* for membership in the training sample set. High values correspond to a better fit.
*
* @param sample Sample of original data.
* @return Higher value indicates it is more likely to be a member of input dataset.
*/
public double response( double[] sample ) {
if (sample.length != A.numCols)
throw new IllegalArgumentException("Expected input vector to be in sample space");
DMatrixRMaj dots = new DMatrixRMaj(numComponents, 1);
DMatrixRMaj s = DMatrixRMaj.wrap(A.numCols, 1, sample);
CommonOps_DDRM.mult(V_t, s, dots);
return NormOps_DDRM.normF(dots);
}
}
</syntaxhighlight>
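The class above computes the basis with SVD; the same first principal component can be illustrated with power iteration on the mean-centered scatter matrix A'A. A plain-Java sketch for 2D data, with invented names, shown only to make the mean-subtraction and dominant-direction steps concrete:

```java
public class PcaPowerIterationDemo {
    // Dominant principal component of 2D samples: mean-center, form A'A, power iterate
    static double[] firstComponent(double[][] data) {
        int n = data.length, d = 2;
        // 1) mean of each column
        double[] mean = new double[d];
        for (double[] row : data) for (int j = 0; j < d; j++) mean[j] += row[j] / n;
        // 2) scatter matrix C = A'A of the mean-centered data (covariance up to scale)
        double[][] C = new double[d][d];
        for (double[] row : data) {
            double x = row[0] - mean[0], y = row[1] - mean[1];
            C[0][0] += x*x; C[0][1] += x*y; C[1][0] += y*x; C[1][1] += y*y;
        }
        // 3) power iteration converges to the dominant eigenvector = first component
        double[] v = {1, 0};
        for (int iter = 0; iter < 100; iter++) {
            double w0 = C[0][0]*v[0] + C[0][1]*v[1];
            double w1 = C[1][0]*v[0] + C[1][1]*v[1];
            double norm = Math.sqrt(w0*w0 + w1*w1);
            v[0] = w0/norm; v[1] = w1/norm;
        }
        return v;
    }

    public static void main(String[] args) {
        // samples lying exactly along the direction (0.6, 0.8)
        double[][] data = {{-6, -8}, {-3, -4}, {0, 0}, {3, 4}, {6, 8}};
        double[] pc = firstComponent(data);
        System.out.printf("principal component: %.3f %.3f%n", pc[0], pc[1]);
    }
}
```

SVD, as used in the EJML example, yields all components at once and is more numerically stable; power iteration only recovers the dominant direction.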
eb4ed8b0e434d8ab332ece67fea34aa9ac5ef1ec
Example Fixed Sized Matrices
0
17
324
283
2021-07-07T15:37:29Z
Peter
1
wikitext
text/x-wiki
Array access adds a significant amount of overhead to matrix operations. A fixed sized matrix gets around that issue by making each element in the matrix a variable in the class. EJML provides support for fixed sized matrices and vectors up to 6x6, beyond which the approach loses its advantage. The example below demonstrates how to use a fixed sized matrix and convert to other matrix types in EJML.
External Resources:
* [https://github.com/lessthanoptimal/ejml/blob/v0.41/examples/src/org/ejml/example/ExampleFixedSizedMatrix.java ExampleFixedSizedMatrix]
== Example ==
<syntaxhighlight lang="java">
/**
* In some applications a small fixed sized matrix can speed things up a lot, e.g. 8 times faster. One application
* which uses small matrices is graphics and rigid body motion, which extensively uses 3x3 and 4x4 matrices. This
* example is to show some examples of how you can use a fixed sized matrix.
*
* @author Peter Abeles
*/
public class ExampleFixedSizedMatrix {
public static void main( String[] args ) {
// declare the matrix
DMatrix3x3 a = new DMatrix3x3();
DMatrix3x3 b = new DMatrix3x3();
// Can assign values the usual way
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
a.set(i, j, i + j + 1);
}
}
// Direct manipulation of each value is the fastest way to assign/read values
a.a11 = 12;
a.a23 = 64;
// can print the usual way too
a.print();
// most of the standard operations are supported
CommonOps_DDF3.transpose(a, b);
b.print();
System.out.println("Determinant = " + CommonOps_DDF3.det(a));
// matrix-vector operations are also supported
// Constructors for vectors and matrices can be used to initialize its value
DMatrix3 v = new DMatrix3(1, 2, 3);
DMatrix3 result = new DMatrix3();
CommonOps_DDF3.mult(a, v, result);
// Conversion into DMatrixRMaj can also be done
DMatrixRMaj dm = DConvertMatrixStruct.convert(a, null);
dm.print();
// This can be useful if you need to do more advanced operations
SimpleMatrix sv = SimpleMatrix.wrap(dm).svd().getV();
// can then convert it back into a fixed matrix
DMatrix3x3 fv = DConvertMatrixStruct.convert(sv.getDDRM(), (DMatrix3x3)null);
System.out.println("Original simple matrix and converted fixed matrix");
sv.print();
fv.print();
}
}
</syntaxhighlight>
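The speed advantage of fixed sized matrices comes from storing each element as its own field, so arithmetic avoids array indexing and bounds checks entirely. A minimal plain-Java sketch of the idea behind DMatrix3x3, shown for a 2x2 case (the class here is invented for illustration, not EJML API):

```java
public class FixedMatrix2x2Demo {
    // Each element is a field, mirroring how DMatrix3x3 exposes a11, a12, ...
    static class M2 {
        double a11, a12, a21, a22;
        M2(double a11, double a12, double a21, double a22) {
            this.a11 = a11; this.a12 = a12; this.a21 = a21; this.a22 = a22;
        }
    }

    // Fully unrolled multiply: no loops, no array access
    static M2 mult(M2 a, M2 b) {
        return new M2(
            a.a11*b.a11 + a.a12*b.a21, a.a11*b.a12 + a.a12*b.a22,
            a.a21*b.a11 + a.a22*b.a21, a.a21*b.a12 + a.a22*b.a22);
    }

    public static void main(String[] args) {
        M2 a = new M2(1, 2, 3, 4);
        M2 identity = new M2(1, 0, 0, 1);
        M2 c = mult(a, identity);
        System.out.printf("%.0f %.0f %.0f %.0f%n", c.a11, c.a12, c.a21, c.a22);
    }
}
```

The unrolled code is why the approach stops paying off past 6x6: the number of fields and hand-expanded terms grows quadratically and cubically, respectively.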
3db023e1461470faabf6fc182f062c97a7a1ac4e
Main Page
0
1
325
307
2021-07-07T15:58:20Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java and has been released under an Apache v2.0 license.
EJML has three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all capabilities of EJML and almost complete control over memory creation, speed, and specific algorithms. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy to use flow styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.41''
|-
| '''Date:''' ''July 7, 2021''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.41/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2021 ==
{| width="500pt" |
|-
|
* Read and write EJML in Matlab format with [https://github.com/HebiRobotics/MFL MFL] from HEBI Robotics
* Graph BLAS continues to be fleshed out with masks being added to latest SNAPSHOT
* Concurrency/threading has been added to some operations
|}
== Code Examples ==
Demonstrations on how to compute the Kalman gain "K" using each interface in EJML.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H,P,c);
multTransB(c,H,S);
addEquals(S,R);
if( !invert(S,S_inv) )
throw new RuntimeException("Invert failed");
multTransA(H,S_inv,d);
mult(P,d,K);
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed Sized
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
9ce1a4512273599a0f8aeef830b16992a5723872
328
325
2022-12-05T03:27:02Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.41.1''
|-
| '''Date:''' ''December 4, 2022''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.41.1/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2021 ==
{| width="500pt" |
|-
|
* Read and write EJML matrices in Matlab format with [https://github.com/HebiRobotics/MFL MFL] from HEBI Robotics
* Graph BLAS support continues to be fleshed out, with masks added to the latest SNAPSHOT
* Concurrency/threading has been added to some operations
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H, P, c);           // c = H*P
multTransB(c, H, S);     // S = c*H'
addEquals(S, R);         // S = H*P*H' + R
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d); // d = H'*inv(S)
mult(P, d, K);           // K = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
b7baf4d0e6b2d28203e7b9591333ceb778b2d631
331
328
2023-02-10T16:36:21Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.42''
|-
| '''Date:''' ''February 10, 2023''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.42/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2021 ==
{| width="500pt" |
|-
|
* Read and write EJML matrices in Matlab format with [https://github.com/HebiRobotics/MFL MFL] from HEBI Robotics
* Graph BLAS support continues to be fleshed out, with masks added to the latest SNAPSHOT
* Concurrency/threading has been added to some operations
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H, P, c);           // c = H*P
multTransB(c, H, S);     // S = c*H'
addEquals(S, R);         // S = H*P*H' + R
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d); // d = H'*inv(S)
mult(P, d, K);           // K = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
87e8fdaf84e9c3896233b065cea3cb0e8986626b
332
331
2023-02-10T16:39:42Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.42''
|-
| '''Date:''' ''February 10, 2023''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.42/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2023 ==
{| width="500pt" |
|-
|
* SimpleMatrix is going through a bit of a rejuvenation
* SimpleMatrix has much improved support for complex matrices
* Introduced ConstMatrix for when you want to provide read-only access
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H, P, c);           // c = H*P
multTransB(c, H, S);     // S = c*H'
addEquals(S, R);         // S = H*P*H' + R
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d); // d = H'*inv(S)
mult(P, d, K);           // K = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
da329fb64d055a7e473d87b4bf8413d4ccdf246c
334
332
2023-04-15T15:46:10Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.43''
|-
| '''Date:''' ''April 15, 2023''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.43/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2023 ==
{| width="500pt" |
|-
|
* SimpleMatrix is going through a bit of a rejuvenation
* SimpleMatrix has much improved support for complex matrices
* Introduced ConstMatrix for when you want to provide read-only access
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H, P, c);           // c = H*P
multTransB(c, H, S);     // S = c*H'
addEquals(S, R);         // S = H*P*H' + R
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d); // d = H'*inv(S)
mult(P, d, K);           // K = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
ee7c730776ee305cc0bdba8274724e02b9484884
339
334
2023-09-24T04:33:04Z
Peter
1
wikitext
text/x-wiki
__NOTOC__
<center>
{| style="width:640pt;"
| align="center" |
[[File:Ejml_logo.gif]]
|-
|
Efficient Java Matrix Library (EJML) is a [http://en.wikipedia.org/wiki/Linear_algebra linear algebra] library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for small and large, dense and sparse, real and complex matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under an Apache v2.0 license.
EJML provides three distinct ways to interact with it: 1) ''procedural'', 2) ''SimpleMatrix'', and 3) ''Equations''. ''Procedural'' provides all of EJML's capabilities and almost complete control over memory allocation, speed, and the specific algorithms used. ''SimpleMatrix'' provides a simplified subset of the core capabilities in an easy-to-use, flow-styled object-oriented API, inspired by [http://math.nist.gov/javanumerics/jama/ Jama]. ''Equations'' is a symbolic interface, similar in spirit to [http://www.mathworks.com/products/matlab/ Matlab] and other [http://en.wikipedia.org/wiki/Computer_algebra_system CAS], that provides a compact way of writing equations.
|}
{|
| colspan="3" align="center" |
{|style="font-size:120%; text-align:left;"
|-
| '''Version:''' ''v0.43.1''
|-
| '''Date:''' ''September 23, 2023''
|-
| [https://github.com/lessthanoptimal/ejml/blob/master/convert_to_ejml31.py v0.31 Upgrade Script]
|-
| [https://github.com/lessthanoptimal/ejml/blob/v0.43.1/change.txt Change Log]
|}
|- valign="top"
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Download|Download]]
|-
| [[manual|Manual]]
|-
| [http://ejml.org/javadoc/ JavaDoc]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [http://groups.google.com/group/efficient-java-matrix-library-discuss Message Board]
|-
| [https://github.com/lessthanoptimal/ejml/issues Bug Reports]
|-
| [[Frequently Asked Questions|FAQ]]
|-
| [[Kotlin|Kotlin]]
|}
| width="220pt" |
{| width="200pt" border="1" align="center" style="font-size:120%; text-align:center; border-collapse:collapse; background-color:#ffffee;"
|-
| [[Acknowledgments|Acknowledgments]]
|-
| [[Performance|Performance]]
|-
| [[Users|Users]]
|}
|}
== News 2023 ==
{| width="500pt" |
|-
|
* SimpleMatrix is going through a bit of a rejuvenation
* SimpleMatrix has much improved support for complex matrices
* Introduced ConstMatrix for when you want to provide read-only access
|}
== Code Examples ==
Demonstrations of how to compute the Kalman gain "K" using each of EJML's interfaces.
{| width="500pt" |
|-
|
'''Procedural'''
<syntaxhighlight lang="java">
mult(H, P, c);           // c = H*P
multTransB(c, H, S);     // S = c*H'
addEquals(S, R);         // S = H*P*H' + R
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d); // d = H'*inv(S)
mult(P, d, K);           // K = P*H'*inv(S)
</syntaxhighlight>
'''SimpleMatrix'''
<syntaxhighlight lang="java">
SimpleMatrix S = H.mult(P).mult(H.transpose()).plus(R);
SimpleMatrix K = P.mult(H.transpose().mult(S.invert()));
</syntaxhighlight>
'''Equations'''
<syntaxhighlight lang="java">
eq.process("K = P*H'*inv( H*P*H' + R )");
</syntaxhighlight>
|}
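The ''Equations'' one-liner above assumes an <code>Equation</code> instance that already knows about the matrices involved. A minimal, self-contained sketch of that setup using the <code>ejml-simple</code> module is shown below; the toy 2x2 values are illustrative only, and in a real Kalman filter P, H, and R would come from the state estimator:
<syntaxhighlight lang="java">
import org.ejml.data.DMatrixRMaj;
import org.ejml.equation.Equation;

public class KalmanGainExample {
    public static void main(String[] args) {
        // Toy matrices standing in for the covariance, measurement, and noise matrices
        DMatrixRMaj P = new DMatrixRMaj(new double[][]{{2, 0}, {0, 2}});
        DMatrixRMaj H = new DMatrixRMaj(new double[][]{{1, 0}, {0, 1}});
        DMatrixRMaj R = new DMatrixRMaj(new double[][]{{1, 0}, {0, 1}});

        Equation eq = new Equation();
        eq.alias(P, "P", H, "H", R, "R");        // bind Java matrices to symbol names
        eq.process("K = P*H'*inv( H*P*H' + R )"); // evaluate the symbolic equation

        DMatrixRMaj K = eq.lookupDDRM("K");       // retrieve the computed gain
        K.print();
    }
}
</syntaxhighlight>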
== Functionality ==
{| class="wikitable" width="650pt" border="1" |
! Data Structures || Operations
|-
| style="vertical-align:top;" |
* Fixed-Size
** Matrix 2x2 to 6x6
** Vector 2 to 6
* Dense Real
** Row-major
** Block
* Dense Complex
** Row-major
* Sparse Real
** Compressed Column
| style="vertical-align:top;" |
* Full support for floats and doubles
* Basic Operators (addition, multiplication, ... )
* Matrix Manipulation (extract, insert, combine, ... )
* Linear Solvers (linear, least squares, incremental, ... )
* Decompositions (LU, QR, Cholesky, SVD, Eigenvalue, ...)
* Matrix Features (rank, symmetric, definiteness, ... )
* Random Matrices (covariance, orthogonal, symmetric, ... )
* Unit Testing
|}
{| class="wikitable" width="650pt" border="1" |
! style="width: 40%;" | Decomposition || style="width: 15%;" |Dense Real || style="width: 15%;" |Dense Complex || style="width: 15%;" |Sparse Real || style="width: 15%;" |Sparse Complex
|-
| LU || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LL || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| Cholesky LDL || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| QR || style="text-align:center;" | X || style="text-align:center;" | X || style="text-align:center;" | X ||
|-
| QRP || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| SVD || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-Symmetric || style="text-align:center;" | X || style="text-align:center;" | || ||
|-
| Eigen-General || style="text-align:center;" | X || style="text-align:center;" | || ||
|}
Support for floats (32-bit) and doubles (64-bit) is available. Sparse matrix support is only available for basic operations at this time.
</center>
afb700130218e12cff4093b7ca0155c2d6f01a35
Download
0
6
326
293
2021-07-07T15:58:48Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge at the following link: [https://sourceforge.net/projects/ejml/files/v0.40/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see list below), and including each one individually can be tedious. To include everything, simply reference the "ejml-all" module, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.41'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.41</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
98b46d1927cc51b778ed2122d2c6bfbf8f33984c
327
326
2021-07-07T16:23:08Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on SourceForge at the following link: [https://sourceforge.net/projects/ejml/files/v0.41/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see list below), and including each one individually can be tedious. To include everything, simply reference the "ejml-all" module, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.41'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.41</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
d69f428f8cf2b1fcef1082f5ebe49bcde79af5a1
333
327
2023-02-10T17:26:41Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the absolute bleeding-edge code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.42/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.42'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.42</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
10384686510b596eceaf06e8606994cdc11d3d98
335
333
2023-04-15T15:46:52Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on Github. There you can access the absolute bleeding edge code. Most of the time it is in an usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="groovy">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://travis-ci.org/lessthanoptimal/ejml https://api.travis-ci.org/lessthanoptimal/ejml.png]
== Download ==
Jars of the latest stable release can be found on Source Forge using the following link: [https://sourceforge.net/projects/ejml/files/v0.43/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several packages (see list below) and including each individually can be tedious. To include all the packages simply reference "all", as is shown below:
Gradle:
<syntaxhighlight lang="groovy">
compile group: 'org.ejml', name: 'ejml-all', version: '0.43'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.43</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
8907f1e5707ff486fbbaeb105c1da727da3dbac1
336
335
2023-04-15T15:51:57Z
Peter
1
wikitext
text/x-wiki
== Source Code ==
Source code is hosted on GitHub, where you can access the bleeding-edge development code. Most of the time it is in a usable state, but not always!
[https://github.com/lessthanoptimal/ejml https://github.com/lessthanoptimal/ejml]
The command to clone it is:
<syntaxhighlight lang="bash">
git clone https://github.com/lessthanoptimal/ejml.git
</syntaxhighlight>
Current status of developmental code:
[https://github.com/lessthanoptimal/ejml/actions/workflows/gradle.yml https://github.com/lessthanoptimal/ejml/actions/workflows/gradle.yml/badge.svg]
== Download ==
Jars of the latest stable release can be downloaded from SourceForge: [https://sourceforge.net/projects/ejml/files/v0.43/ EJML Downloads]
== Gradle and Maven ==
EJML is broken up into several modules (see the list below), and including each one individually can be tedious. To include all of them, simply reference the "all" module, as shown below:
Gradle:
<syntaxhighlight lang="groovy">
implementation group: 'org.ejml', name: 'ejml-all', version: '0.43'
</syntaxhighlight>
Maven:
<syntaxhighlight lang="xml">
<dependency>
<groupId>org.ejml</groupId>
<artifactId>ejml-all</artifactId>
<version>0.43</version>
</dependency>
</syntaxhighlight>
Individual modules:
{| class="wikitable"
! Module Name !! Description
|-
| ejml-all || All the modules
|-
| ejml-ddense || Dense Real Double Matrices
|-
| ejml-fdense || Dense Real Float Matrices
|-
| ejml-zdense || Dense Complex Double Matrices
|-
| ejml-cdense || Dense Complex Float Matrices
|-
| ejml-simple || SimpleMatrix and Equations
|-
| ejml-dsparse || Sparse Real Double Matrices
|-
| ejml-fsparse || Sparse Real Float Matrices
|}
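Once a module is on the classpath, the object-oriented interface can be used directly. The following is a minimal sketch, assuming ejml-all (or ejml-simple together with its dependencies) is available; the class name Example is arbitrary:

<syntaxhighlight lang="java">
import org.ejml.simple.SimpleMatrix;

public class Example {
    public static void main(String[] args) {
        // Solve the linear system A x = b using SimpleMatrix
        SimpleMatrix A = new SimpleMatrix(new double[][]{{2, 1}, {1, 3}});
        SimpleMatrix b = new SimpleMatrix(new double[][]{{5}, {10}});
        SimpleMatrix x = A.solve(b);
        System.out.println(x);
    }
}
</syntaxhighlight>

SimpleMatrix internally dispatches to an appropriate decomposition, so the same call works for square and over-determined systems.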
060f5e6cf4715b622f7f54c3a2b5f8eb41002b33